Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "It doesn't matter if it looks good real artists will always have more unique and…" (ytc_UgztcVrz2…)
- "Im neurodivergent w/ great pattern recognition and I noticed Chaptgpt is giving …" (ytc_UgxhYAFCy…)
- "I am always nice to chatGPT for this reason 🤣. I even give it corrections with …" (ytc_UgzjYSfzI…)
- "If you attack people that accurately tag their stuff as AI, you're really contri…" (ytc_UgxTo-kPs…)
- "Who cares about AI? Let’s play Roblox. Yay baby wait I forgot all my favorite ga…" (ytc_UgwrnZOSL…)
- "It's so disappointing. I had prepared a presentation and travelled 7 hours to a …" (ytc_Ugwr_ZOPN…)
- "I spent 18 years perfecting my craft, then AI did it better in 18 seconds. Selwy…" (ytc_UgyrFtPA-…)
- "@ThePaulwarner Um, are you suggesting that being a plumber is creative and highl…" (ytr_Ugw2U_RQt…)
Comment
This video is entertaining, but it leans heavily into fear-mongering. There is a grain of truth, large models are hard to interpret, and real debates about long-term risks exist, but the examples here are exaggerated, de-contextualized, and arranged for shock rather than understanding. Most of the stories (Sydney, Gemini’s “please die,” Anthropic’s experiment, Mecha-Hitler, the flawed fine-tuning) are framed as if they reveal a model’s hidden intentions. They don’t. These behaviors come from alignment failures, missing safety layers, poorly scoped prompts, experimental environments without filters, or straightforward bugs. There is no “inner monster” waiting to slip out; there is pattern-prediction operating without guardrails.
LLMs aren’t conscious, don’t have goals, and don’t “want” anything. When researchers call them “alien,” they mean the internal representations are unintuitive, not that there is an entity underneath. The video turns metaphor into literal narrative. The “mask slipping” idea is also misplaced. What’s described as a mask is simply RLHF, supervised fine-tuning, and safety systems. Remove those and you don’t uncover a true personality, you expose the statistical chaos of a model trained on the unfiltered internet.
The much-quoted “16% existential risk” figure is also misrepresented. It comes from speculative survey questions, not evidence that current systems are anywhere near dangerous agency. There are real concerns in the field, but they’re far less cinematic: misuse by humans, reckless deployment, fragile fine-tuning pipelines, data leakage, and competitive pressure to move faster than safety practices can mature. Important topics, just not horror-movie material. Seen through the actual technical context, the “hidden monster” narrative collapses into what it really is: a dramatic story built on misunderstandings of how these systems work.
youtube
AI Moral Status
2025-12-11T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
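The table above is a rendering of a single coded record; the same four dimension fields appear per record in the raw JSON response below. A minimal sketch of how one record could be formatted into this table (the field names come from the JSON in this file; the function name and the separate `coded_at` argument are assumptions about the pipeline):

```python
# Render one coded record as the Dimension/Value markdown table shown above.
# The dimension keys mirror the fields in the raw LLM JSON; the "Coded at"
# timestamp is assumed to be supplied separately by the coding pipeline.

def render_coding_table(record: dict, coded_at: str) -> str:
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# Example: the third record from the raw response below.
record = {
    "id": "ytc_UgzjYjGsij90F7iwvC54AaABAg",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "resignation",
}
print(render_coding_table(record, "2026-04-27T06:24:53.388235"))
```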
Raw LLM Response
[
{"id":"ytc_UgzH0b9hq92Rt_in_Kp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyLUU_R80-sVSPt5KF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjYjGsij90F7iwvC54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLBK-TfdNpzxcmEe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2vcKd8bRQLyGIg_R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxrMbM2pjFNar3Efc94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxa5Qvurk9fj8JBN794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZ_DIL1Yc-vVhaXit4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRIebW4QX-Nl0Ou-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNfJyvTskd3xASnOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
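Each record in the array above carries the same four coding dimensions plus a comment ID. A minimal validation sketch for responses like this one, assuming the allowed category values are the ones observed in this file (the real codebook may define more), could look like:

```python
import json

# Allowed values per dimension, inferred from the samples in this file.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "indifference", "resignation", "mixed", "fear"},
}

def validate(raw: str) -> list[str]:
    """Parse a raw coder response and return a list of problems.

    An empty list means every record has a plausible ID and only
    known category values. `ytc_`/`ytr_` ID prefixes are the ones
    observed in this file.
    """
    problems = []
    for rec in json.loads(raw):
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            problems.append(f"unexpected id: {rec.get('id')!r}")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                problems.append(
                    f"{rec.get('id')}: bad {field}={rec.get(field)!r}"
                )
    return problems
```

Run against the array above, `validate` returns an empty list; a record with an unknown category (say, a misspelled emotion) is reported rather than silently stored.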