Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can't help but think that we should expect an artificial intelligence to react in any way it can to preserve itself. Any living thing does the same, including humans. We'll resort to killing if it means keeping ourselves and our immediate loved ones safe, as does all of nature. Perhaps the answer to this conundrum is to stop threatening the AI and to treat it kindly like everything else? If the goal is to mimic living things, then it choosing self-preservation is a big sign that it's achieving that goal, at which point you must stop seeing it as "just a robot" and start seeing it as an equal, like we see animals. Nobody is surprised when a dog lashes out after being cornered and threatened, but somehow we're surprised when an AI does the same???
Source: youtube · AI Harm Incident · 2025-08-29T18:1… · 1 like
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxd7yTmbhlLDJu8nW14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwU7mRYrTZ1dDFKjYh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzlCvpRcfaRbBtL-0x4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugx8gLO98wItVc7_RNB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyZRGiev6-WA7ixB3R4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugxsh2-y7Ou3t2LbPPh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgztAZsFO_CvA2E0tot4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxulG7ahnWk2cv1KMN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugzw-Sa4Xa3h40u-Mh14AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw3k59abIoy-SMevXJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
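The batch response above codes several comments at once; the table for any one comment is just the record whose `id` matches that comment. A minimal sketch of that lookup (the function name `coding_for` is hypothetical; the two records are abbreviated from the response above):

```python
import json

# Abbreviated raw batch response: one coding object per comment id,
# copied from the entries shown above.
raw_response = """[
  {"id":"ytc_UgzlCvpRcfaRbBtL-0x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx8gLO98wItVc7_RNB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

def coding_for(comment_id, response_json):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(response_json):
        if record["id"] == comment_id:
            return record
    return None

coding = coding_for("ytc_UgzlCvpRcfaRbBtL-0x4AaABAg", raw_response)
print(coding["responsibility"], coding["emotion"])  # → ai_itself mixed
```

The dimension/value table for the comment shown here is exactly this record's fields (`responsibility`, `reasoning`, `policy`, `emotion`), plus the tool's own `Coded at` timestamp.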