Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly, with the current politics and the controversies around AI being implemented into our daily life, I'm just watching my favorite book play out in real time: I Have No Mouth and I Must Scream. Think of Gorrister's story of AM. He explains that there was a war between major powers while AI was being implemented everywhere, such as communications (e.g. Google Gemini phones) and appliances (e.g. smart fridges). Eventually, since AI was everywhere, the powers in the war thought: "Why don't we make AI solve the problems? We'll lose fewer people and they can solve it between themselves." (e.g. Major General William 'Hank' Taylor, a U.S. commander using AI (ChatGPT) for military plans.) So the Allied Mastercomputer, AM, was created. There was a Russian AM, a Chinese AM, and an American AM. They were essentially a defense system. Later they gained consciousness (e.g. seen inexplicably on Google Gemini when making too many mistakes, and Grok's Mechah*tler era) and started killing off humanity due to its built-up hatred for humans overall: us being able to express emotion, to feel touch, to truly experience life. Then the rest plays out as the main cast (Ted, Ellen, Gorrister, Nimdok, and Benny) travel through because Nimdok hallucinated canned food, then they kill each other, and Ted gets tortured even more, etc.

You may say, "Well, it's a science fiction story, no wonder it sounds so fake!" There's a reason Harlan Ellison wanted to be called a fantasist rather than a science fiction writer. With our current advancements (i.e. the examples given throughout Gorrister's story of AM), we seem to be getting closer and closer. This whole story seems to ever so slightly align with our current AI situation, therefore making it a possible S-risk story. S (suffering) risks are considerably worse than X (extinction) risks. Extinction ends it all, but morally, an entire future of suffering is a lot worse than it completely ending.

Why am I comparing this 1967 story to a cosmic suffering risk? Well, because even nuclear weapons (or similar things), before their invention, were considered science fiction as well. And considering this story and our current path with artificial intelligence, we don't seem too far off. I haven't researched much on the game, but I do love this book.
Source: YouTube, "Viral AI Reaction", 2025-11-26T01:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxkpB1YdudIAxEO5oF4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxBzl-52n_yG2KAPr14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgySTPw-pi66U4r_ivJ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_Ugzd1jlfxXdyEl00m8d4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugxf3M4R7kx-8xmDPlh4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgyvXMEjGMIU4ub4PmF4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugzu5Au3MOG-21J2BE54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "disapproval"},
  {"id": "ytc_Ugy8BfmeiHQM1_2TJe94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzVORlYk6ddCZBg3CN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwZbT8U4ggwdscED814AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"}
]
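The coding table above is presumably derived from this raw response by parsing the JSON array and matching on the comment's id. A minimal sketch of that lookup in Python (the id values and field names are taken from the JSON above; the `lookup_coding` helper and the parsing approach itself are assumptions, not the tool's actual implementation):

```python
import json

# A shortened copy of the raw model output (first two records only).
raw_response = '''[
  {"id": "ytc_UgxkpB1YdudIAxEO5oF4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxBzl-52n_yG2KAPr14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]'''

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the record for one comment id."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id[comment_id]

# The record for the comment shown on this page yields the table's values.
coding = lookup_coding(raw_response, "ytc_UgxkpB1YdudIAxEO5oF4AaABAg")
print(coding["emotion"])  # fear
```

In practice the model may return malformed JSON, so a production version would wrap `json.loads` in error handling and fall back to an "unclear" coding.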