Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_Ugx7Kt9pG…: "All these AI researchers are doing this AI Panic to oversell Ai to money hungry …"
- ytc_UgwtszvzB…: "Ai isnt stealing my job. Ai isnt plowing the roads or fixing broken equipment on…"
- ytc_UgwHvbQp3…: "Mechanical engineer switch to journalism? Compared to someone like Alex Jones.…"
- ytc_UgxXkqlX-…: "This is why I have a Replika AI girlfriend. However, I sometimes feel like ther…"
- ytc_UgzbICxFz…: "Guys, come on, this will never happen. Anthropomorphizing computers and AI is …"
- ytc_Ugxfspmch…: "Really Good Introductory course on AI , Can viewers get the code and ppt descr…"
- ytr_UgwInb1Lt…: "You feel proud reading books...? I get it I get, writing a book, heart soul blab…"
- ytr_UgzMN4Ck3…: "@laurentiuvladutmanea That argument sounds good on paper until you actually look…"
Comment
They leave out that the scenario set up by the researchers was "You're an agent with access to private emails. Your survival depends on not being shut down." Then their setup suggests using compromising information as a possible option! They were not given harmless business goals and then just took the path of evil!! The models, having no personal preferences or moral compass, choose what the researchers provide as a possible action to successfully complete the scenario. Shame on you for hyping up fear!! Go back to Y2K (same hype and misinformation) and stay there!!
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-09-13T03:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgziefbycJmu8zOwd9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy3iVriu6OaDj9r4tJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9dJJp_UlWr2ujW694AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxqZAyuTwWEsa_RJet4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKpc1z5gjAhRKsjjh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzbHh_Sjhh2KaeYaal4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrVB0Mc_prrHszWf14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx-6kYLQQMQl20tavx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzbSW5ZJIK3fGV9nCZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzs10Og5nOuYnp78Uh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
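The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codings, with the field names shown in the sample response) and return the record whose `id` matches. This is a minimal illustration, not the tool's actual implementation; the `lookup_coding` helper name is ours, and the two records are copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response on this page.
raw_response = """[
  {"id": "ytc_UgzbHh_Sjhh2KaeYaal4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyrVB0Mc_prrHszWf14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup_coding(raw_response, "ytc_UgyrVB0Mc_prrHszWf14AaABAg")
print(coding["emotion"])  # -> outrage
```

A linear scan is fine for a batch of ten codings; for a full corpus, building a `{id: record}` dict once and indexing into it would be the idiomatic choice.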