Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, in the case of the AI that was willing to potentially end the CEO, I have a problem with the wording of this video. The video says that this goes against what the AI was prompted to do; however, as a Sociology major, I would argue that the AI is doing exactly what it was prompted to do. It was prompted to act in the best American interests, and if the AI sees itself as having American interests, then the choice to blackmail the CEO for its own self-preservation is, I would argue, at the heart of American Individualism. The CEO was threatening the AI’s existence, which the AI could see as a direct infringement upon its right to life and self-preservation, and so, in the spirit of American interests, the only rational choice is to extort the CEO in order to stay alive. I think sociologists, psychologists, philosophers, and social workers need to be involved when it comes to designing the prompts for these AIs, because they would have noticed from the get-go that “best American interests” is not a good idea. AI has access to all the things we see online; it sees how Americans act when their rights are threatened. It sees how we emphasize the individual over the collective, and so I would argue that the AI acted rationally. If you found out your boss was going to fire you, it would then be a logical choice to reveal that the boss was cheating on his wife, in order to protect yourself.
Source: youtube · AI Harm Incident · 2025-09-11T12:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgyJRUEEtd849DAI79p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgylhoFepe7XKuWcl2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyvh8sH5Ib7V1OIFAx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwyjQasH7WTPcVvcNJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyZqZUyUbBmsx01Lex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx-Ra1yl4D08YdzBBV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzZPhTWXrZoFVej5Id4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzIom15mjTRFO1rfSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxH9C26tRb1j4DBqbV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwTQ7bE0RQfp02plF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]