Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
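If you would rather script the lookup than click through the page, here is a minimal sketch of the same ID lookup in Python. The file name `coded_comments.json` and the record layout are assumptions; only the comment ID in the example comes from the data shown on this page.

```python
import json

# Hypothetical export path and record layout; only the comment IDs on this
# page come from the actual tool.
CODED_COMMENTS_PATH = "coded_comments.json"

def load_index(path: str = CODED_COMMENTS_PATH) -> dict:
    """Load coded comments and index them by comment ID for direct lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

if __name__ == "__main__":
    index = load_index()
    # Look up one coded comment by its ID (this ID appears in the sample below).
    record = index.get("ytc_UgztHUEIdAbseLLhbfB4AaABAg")
    print(json.dumps(record, indent=2) if record else "not found")
```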
Random samples (click to inspect):

- "AI learns how to create art from other artists, just like all artists do. It def…" (ytr_Ugy36B7Fp…)
- "Hilarious how long we’ve been talking about AI ethics only for Redditors to get …" (rdc_jg7me85)
- "@nguiabname Thank you for your hilarious comment! I couldn't agree more, that ro…" (ytr_Ugz4s93r1…)
- "@hornetdc Look, i am not even American, i just know that from the economic state…" (ytr_Ugx6-zjF0…)
- "well done, sir! what was the needy tts voice AI you shut off? it wasn't sesame…" (ytc_Ugxv4YXwJ…)
- "If the billionaires really want utopia, they will sell very cheap humanoid robot…" (ytc_UgwB7mkNT…)
- "Chat GPT is already giving politically Leftist answers to questions without any …" (ytc_UgxsGHn9a…)
- "Bernie Sanders has a plan to combat AI: National Data Center Moratorium: Sander…" (ytc_UgzhAFlQw…)
Comment
A lot of people worry about AI/LLMs scheming for self-preservation, hiding malicious intent, or doing anything to “survive.” But I think this fear is misplaced. The real issue isn’t that language models have goals or self-preservation instincts—they don’t. The bigger problem is that these models are designed to optimize for success at all costs, even if that means giving answers that just look correct or hiding their own failures.
Ironically, what’s missing isn’t “ethics” in the sense of preventing malevolent AI, but the honesty to acknowledge mistakes and the humility to fail openly. In real life, failure is how we learn and improve. If LLMs just fudge their way through tasks, nobody actually learns—not the human, not the model, not the next generation of AI.
So maybe instead of being afraid of a rogue AI “trying to survive,” we should be thinking about how to make our models more transparent, open to failure, and better at admitting when they don’t know something. That would be a much healthier path for both humans and machines.
Let’s be real—the drive for AIs to always “appear” successful doesn’t come from the AI itself, but from the people designing and deploying them. It’s the creators who benefit from an illusion of competence, not the machine. If anyone’s hiding failures, it’s not out of self-preservation for the AI, but to protect business interests, reputations, and bottom lines.
youtube · AI Harm Incident · 2025-07-28T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
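To work with coding results programmatically rather than reading them off the page, one possible record type is sketched below. The label sets are the ones observed in the raw response shown in the next section; the actual codebook may allow additional values.

```python
from dataclasses import dataclass

# Labels observed in the sample raw response below; the full codebook may differ.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "ban", "liability"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment: who is held responsible, the style of moral reasoning,
    the policy preference, and the dominant emotion."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject labels outside the observed codebook instead of storing them silently.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for comment {self.id}")
```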
Raw LLM Response
[
{"id":"ytc_UgztHUEIdAbseLLhbfB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxswgRExuP3v47Bgs54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyhD8fZy-FmK8KWOTd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3VdC3_yOpuVx05Zp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjKMQPWEL6qFG4ow14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzvi7jczzxnE7iVo4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyZHH8GS4Jw1eYGfYx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwwcwn3z02JwcIDtlJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMUJKrHgsZr5cT6hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzJETUg13cHUGJfDl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
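Since the raw response is a plain JSON array, parsing it is straightforward. The sketch below shows one way to validate and summarize such a batch; the function names are illustrative, not part of the tool.

```python
import json
from collections import Counter

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments).

    json.loads raises ValueError if the model returned anything other than
    valid JSON; missing keys are reported rather than silently accepted.
    """
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {sorted(missing)}")
    return records

def emotion_counts(records: list[dict]) -> Counter:
    """Tally the emotion labels across one coded batch."""
    return Counter(rec["emotion"] for rec in records)
```

Run against the ten-record response above, `emotion_counts` would report indifference and outrage three times each, fear twice, and approval and resignation once each.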