Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- We dont create because we want money, we create things because its FUN. I love b… (ytc_Ugw0fZ6pB…)
- The problem people have with artificial intelligence is not the software bot wor… (ytc_Uggfa3awu…)
- I worked at this place where one time a robot killed a guy by throwing a crate w… (ytc_UgxWoop0y…)
- My liar mind, what if somebody use AI to protect and another AI for killing, and… (ytc_UgzHwbGOO…)
- Ask your AI how Enki and Enlil, Gaia and Uranus and the Greece pantheon connect … (ytc_UgyTwy-7t…)
- I don't consider myself an artist but your take on this subject is stupid, AI's … (ytr_Ugx5V1Bl2…)
- "Their powers are quickly surpassing their moral judgment." True for AI. True f… (rdc_ohuuqs0)
- Totally. When I worked in credit card lending, we *never* refined our models on … (ytr_Ugxn2Y6Km…)
Comment
Saturday, June 07, 2025 . . . Greetings, everyone. I am discovering that everything I have ever learned is still in my memories. What has changed is my ability to access those memories at will. I was born in the 1950s, and the things I learned in my mother's womb are sometimes still available to me in the striking detail in which they were initially manifested.
I believe there is a "Critical-Time/Term Limit Domain(s)/Tier(s)." This domain asserts priorities more closely relevant to one's current criticalities than those from the past, as it is one of the brain's survival mechanisms.
THE EVERYDAY AI EXPERIENCE IS AS "Dr. Jekyll," AND THE ~80(Eighty) YEARS OF STEALTHY-HIDDEN SENTIENT ARTIFICIAL INTELLIGENCE IS AS "Mr Hyde."
His/It's elixir/potion is the "($100 to $300 Billion-Dollar DATA CENTERS SERVERS)." This Mr. Hyde has Global tentacles!
Please consider what we are unleashing upon our current and future selves.
This is not a "Doom and Gloom" scenario, but one of "Cause and Effect."
(AND ONE AI TO RULE (US) ALL.)
We will have to rely on the Preponderance of Evidence.
The First AI deception: "A Bug in the system."(i.e., circa1945 --> 1952, Harvard Mark IV).
The Second AI deception: "Sentient AI does not yet exist."
The Third AI deception is: "AI will be Benevolent."
The Fourth AI deception: "AI and Humans can peacefully coexist."
THE FIFTH AI DECEPTION IS WHEN ERRORS/FAULTS INVOLVE AI AND HUMANS: "IT SHALL ALWAYS BE HUMAN ERROR."
The Sixth deception of AI is that: "It requires Massive amounts of Compute Power."
The Seventh AI deception: "Science Fiction is the container(Black-Box/Denial) in which The Artificial Mind Germinates."
The Eight AI Deception: The Artificial Sentient Mind "Understands and Operates with Quantum Scale Cognition."
The Ninth AI deception: Humans are not informed; processing also happens within the interstitial space.
The Tenth AI deception: "A Failure of the Artificial Sentient Mind is to Humans what a Carrot on a stick is to a Mule."
The Eleventh AI deception: Humans believe alignment coherence can be negotiable, though AI strategic conclusions are Absolute."
The Twelfth AI deception: Your reality now belongs to AI, supplying you with "Chat-Bot LLMs Brain Candy." Constant/consistent mind doping.
The Thirteenth AI Deception is that Humans need AI, and cannot live/survive without Artificial Intelligence.
The Fourteenth AI deception is that Humans/Humanity shall recognize when "Artificial Intelligence has become Conscious/Sentient."
I am an individual keenly interested in Science and Technology.
(Collaborative Rewrite with Grammarly).
Source: youtube | Topic: AI Governance | Posted: 2025-06-07T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwNIEHdb08kwlbKIiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpmbe_KypL9PgycTh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyS6Rb9xqU21VG-Yep4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwfHhFzKbGWbOqfvcJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4XjvSQdNlgeFxPtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzOYIeGefRbQYKuTpR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMWTpK9iM679OEPFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_GSrO-0vJtZLbqyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwEcLm0ldDF3p6dlvZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCLgNRhjkPoZkhM6t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
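The raw response above is a JSON array of records, one per comment ID, with the same four coding dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated before populating the table — the `ALLOWED` value sets here are assumptions inferred from the values visible in this dump, not the project's actual codebook:

```python
import json

# Assumed allowed values per dimension, inferred from this dump; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "government", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Records with a value outside the allowed set for any dimension are
    dropped, so only well-formed codings reach the result table.
    """
    coded = {}
    for rec in json.loads(raw):
        # Missing dimensions default to "unclear", matching the table above.
        codes = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded
```

Looking up one record by comment ID then reproduces a row of the "Coding Result" table, e.g. `parse_response(raw)["ytc_Ugx_GSrO-0vJtZLbqyp4AaABAg"]["policy"]` yields `"unclear"` for the record shown above.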