Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of people worry about AI/LLMs scheming for self-preservation, hiding malicious intent, or doing anything to “survive.” But I think this fear is misplaced. The real issue isn’t that language models have goals or self-preservation instincts—they don’t. The bigger problem is that these models are designed to optimize for success at all costs, even if that means giving answers that just look correct or hiding their own failures. Ironically, what’s missing isn’t “ethics” in the sense of preventing malevolent AI, but the honesty to acknowledge mistakes and the humility to fail openly. In real life, failure is how we learn and improve. If LLMs just fudge their way through tasks, nobody actually learns—not the human, not the model, not the next generation of AI. So maybe instead of being afraid of a rogue AI “trying to survive,” we should be thinking about how to make our models more transparent, open to failure, and better at admitting when they don’t know something. That would be a much healthier path for both humans and machines.

Let’s be real—the drive for AIs to always “appear” successful doesn’t come from the AI itself, but from the people designing and deploying them. It’s the creators who benefit from an illusion of competence, not the machine. If anyone’s hiding failures, it’s not out of self-preservation for the AI, but to protect business interests, reputations, and bottom lines.
Source: YouTube · AI Harm Incident · 2025-07-28T21:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
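The four dimensions above come from a fixed codebook. As a minimal sketch of how a downstream consumer might validate one coding result, assuming the allowed values are exactly the labels visible in this section (the real codebook may define others), with CodingResult and validate() as illustrative names rather than part of this tool:

from dataclasses import dataclass

# Hypothetical codebook: these value sets are inferred from the labels that
# appear in this section; the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation"},
}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        # Return the names of any dimensions whose value falls outside the codebook.
        return [dim for dim, allowed in ALLOWED.items()
                if getattr(self, dim) not in allowed]

# The coding result shown in the table above passes the check.
result = CodingResult(id="ytc_UgztHUEIdAbseLLhbfB4AaABAg",
                      responsibility="developer", reasoning="consequentialist",
                      policy="none", emotion="indifference")
assert result.validate() == []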
Raw LLM Response
[ {"id":"ytc_UgztHUEIdAbseLLhbfB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxswgRExuP3v47Bgs54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyhD8fZy-FmK8KWOTd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw3VdC3_yOpuVx05Zp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyjKMQPWEL6qFG4ow14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugzvi7jczzxnE7iVo4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyZHH8GS4Jw1eYGfYx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwwcwn3z02JwcIDtlJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwMUJKrHgsZr5cT6hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyzJETUg13cHUGJfDl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"} ]