Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video makes a basic category error and then builds a horror story on top of it. LLMs don’t have intent, goals, beliefs, or a “true nature.” They’re probability models over language. When alignment constraints are weakened or broken, you don’t “reveal the monster underneath,” you expose parts of the output space that were previously suppressed. That’s not a mask slipping, it’s a filter failing. The bad-code fine-tuning example is especially misframed. Degrading representations can knock out alignment features as collateral damage. That doesn’t mean the model was secretly antisemitic or genocidal. It means alignment is coupled and fragile. This is a known ML problem, not cosmic horror. The shoggoth meme is doing emotional work here, not explanatory work. It replaces engineering reality with Lovecraft vibes and encourages people to project agency where none exists. The real risk isn’t an “alien mind inside the machine,” it’s humans mythologizing tools instead of understanding their failure modes.
youtube AI Moral Status 2025-12-16T14:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwe8SeMOU0SFcby49p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzB45xugpNDG0Rexn94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdjrbkRm4n9QlhuLt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzNFHMeR8VA0AZNIS14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwdbizx8U4RC2lEA5J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzwF1UKeQ2X9Mq_d_t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwyT013V4Be3OifIL94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1B_-QphtgUrlrU394AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwmYUYCDVjI5KSK_Vl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy-Xe9R_F7y80-WWm54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
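The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a batch can be parsed and inspected (the field names come from the response above; the sample is truncated to three records, and the tallying step is just one illustrative way to summarize a batch, not part of the coding pipeline):

```python
import json
from collections import Counter

# Sample records copied verbatim from the raw LLM response above
# (truncated to three of the ten objects for brevity).
raw = '''[
  {"id": "ytc_Ugwe8SeMOU0SFcby49p4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdjrbkRm4n9QlhuLt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzNFHMeR8VA0AZNIS14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index by comment id so the exact model output for any coded comment
# can be looked up directly.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgzdjrbkRm4n9QlhuLt4AaABAg"]["emotion"])  # fear

# Tally one coding dimension across the batch.
emotions = Counter(r["emotion"] for r in records)
print(dict(emotions))
```

Looking records up by id mirrors the "inspect the exact model output for any coded comment" workflow this page supports; the same pattern extends to the other dimensions (responsibility, reasoning, policy).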