Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If CCTV could prove the man they had in custody was 30 miles away; that means bac…
ytc_UgwXyNdCz…
I think AI generative art feature should be just simply thanos snapped. Like the…
ytc_Ugxm0qE0p…
saying that people will work less and have a better life quality/better payment …
ytc_UgyPgAPjN…
AI needs humans to know which books we like, to know which UI/UX we prefer, to k…
ytc_Ugyy16gSn…
They better watch out since AI isn't as smart as people think. Taco Bell tried h…
ytr_UgzqioxF4…
I plan to get into digital art as a trad artist here, i plan to poison everythin…
ytc_UgwHh4eGW…
If AI is so good why do the slop creators keep trying to pretend to be real arti…
ytc_UgwgVIsJc…
ChatGPT Imitating the Qur'an? |Mansur
https://youtu.be/TAd7Nn1wagE
Youtube Shad…
ytc_Ugwd2zj2S…
Comment
If anyone here actually read the article from Anthropic, you would see that the AI was simulated into roleplay as an employee of the company and to stop at nothing to keep its goal alive and itself, alive. It was not given any restraint in anything that it can do, therefore, naturally, and due to the roleplay, "agentic misalignment" would occur. This warns about the issues with AI having autonomy and freedom in doing what it wants with no restraint. We know for certain, no developer would create something with zero fail safes. Could it happen? Yes. Would anything occur out of it? Nothing destructive no. This simulation doesn't make sense for the use cases of AI. And in the end, it'll just be turning off the power to the AI and moving on. There is no reason for fearmongering, and this video is exactly that.
youtube
AI Harm Incident
2025-09-11T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgypdqhZO6S-unr09t94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbEbljVAjN3NogN2d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYkmMutn0qQVjX1al4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugys_jrCZjYLr8EziXJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxObEbvI3NXCbbZA_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwK0o6Jf4D0G0G2K14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO1kltTQk3jvW3bL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYy5njazSxSrrn1R14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSs3yt56dC_BeqiMN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5NDEMsUitsjfrFod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
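A raw response like the one above can be checked before its codes are stored. The sketch below parses the JSON array and drops any record whose values fall outside the codebook. The allowed value sets are inferred only from the values visible in this sample; the actual codebook may define more categories, and the comment ID used in the example is hypothetical.

```python
import json

# Allowed values inferred from this sample alone; the real
# codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"company", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "indifference", "outrage", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose
    dimension values all appear in the codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip malformed entries that are not objects or lack an ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# "ytc_example" is a placeholder ID, not a real comment.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(len(validate_response(raw)))  # 1 valid record
```

Filtering rather than raising keeps a batch of ten codes usable even when the model hallucinates one out-of-codebook label.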