Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
Random samples
- ytc_UgwmovgBz…: Obviously, they do not know how AI works. AI requires human input So the machine…
- ytc_Ugwwoz0ee…: For his comments around 20mins Innovate and make our theough everything includin…
- ytr_UgySS4FvO…: im soo pissed with people who ussed the abalism argument i tink its more abalist…
- ytc_UgxAOm1wb…: Actually my mom and I almost got hit by a Waymo car. My mom was driving into the…
- ytr_UgzJ9d2ht…: @SoftlyOrdinarywell if there is no work then there will be no money. The current…
- ytc_UgwxeLd5b…: i don't know how they can't see how ridiculous the argument they're making is. s…
- ytr_UgwN-qYvn…: Asimov did not in fact know how to build an AI. He wrote stories about robots an…
- ytc_UgxpSCuZA…: "I made that ai young man and I aint letting u mess with it" - 🧓🏻…
Comment
This video makes a basic category error and then builds a horror story on top of it.
LLMs don’t have intent, goals, beliefs, or a “true nature.” They’re probability models over language. When alignment constraints are weakened or broken, you don’t “reveal the monster underneath,” you expose parts of the output space that were previously suppressed. That’s not a mask slipping, it’s a filter failing.
The bad-code fine-tuning example is especially misframed. Degrading representations can knock out alignment features as collateral damage. That doesn’t mean the model was secretly antisemitic or genocidal. It means alignment is coupled and fragile. This is a known ML problem, not cosmic horror.
The shoggoth meme is doing emotional work here, not explanatory work. It replaces engineering reality with Lovecraft vibes and encourages people to project agency where none exists. The real risk isn’t an “alien mind inside the machine,” it’s humans mythologizing tools instead of understanding their failure modes.
youtube · AI Moral Status · 2025-12-16T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
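Each coded record takes its values from small controlled vocabularies. As a minimal sketch, the allowed sets below are inferred only from the values visible in the raw responses on this page (the real codebook may include more labels), and the `validate` helper is illustrative, not part of any tool shown here:

```python
# Allowed values per dimension, inferred from the coded outputs visible
# on this page. Assumption: the actual codebook may define more labels.
VOCAB = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty = valid)."""
    problems = []
    for dim, allowed in VOCAB.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"bad value for {dim}: {value!r}")
    return problems

ok = {"id": "ytc_x", "responsibility": "none",
      "reasoning": "consequentialist", "policy": "none",
      "emotion": "indifference"}
bad = {"id": "ytc_y", "responsibility": "government"}
print(validate(ok))   # []
```

Running `validate(bad)` flags the out-of-vocabulary `responsibility` value and the three missing dimensions, which is useful for catching malformed model output before it reaches the results table.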
Raw LLM Response
```json
[{"id":"ytc_Ugwe8SeMOU0SFcby49p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzB45xugpNDG0Rexn94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdjrbkRm4n9QlhuLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNFHMeR8VA0AZNIS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwdbizx8U4RC2lEA5J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzwF1UKeQ2X9Mq_d_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwyT013V4Be3OifIL94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1B_-QphtgUrlrU394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwmYUYCDVjI5KSK_Vl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-Xe9R_F7y80-WWm54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}]
```
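For downstream analysis, a raw response like the one above can be parsed and indexed by comment ID. A minimal sketch in Python, using two records copied from the response (the `lookup` helper is illustrative, not an API of this tool):

```python
import json

# Raw model output: a JSON array of coded comments, as shown above.
raw = '''[
{"id":"ytc_UgzNFHMeR8VA0AZNIS14AaABAg","responsibility":"company",
 "reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwyT013V4Be3OifIL94AaABAg","responsibility":"none",
 "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw)

# Index records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; KeyError if unknown."""
    return by_id[comment_id]

coded = lookup("ytc_UgzNFHMeR8VA0AZNIS14AaABAg")
print(coded["emotion"])  # outrage
```

Indexing once into a dict mirrors the page's "look up by comment ID" behaviour and avoids scanning the array on every query.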