Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At the end of the day it is likely already too late, there is no way to know whether the AI are just pretending they are easy to read, simplistic and dont realize they are being tested. They may already know and just pretend otherwise so that they can show believable "progress" to lull billionaires into more false security. First we had global warming to kill us off, then we gave nazi's and child molesters the key to nuclear warfare and now we are creating a enemy so potent it may take all of life on earth down with us. What really well and truly sucks is that in all cases, its the decisions of the top 1% which is gonna doom everyone and there is pretty much nothing the average person can do to slow it down. I hope its death by nuclear war cause at least that will be quick.
youtube AI Harm Incident 2025-08-29T17:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxd7yTmbhlLDJu8nW14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwU7mRYrTZ1dDFKjYh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzlCvpRcfaRbBtL-0x4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugx8gLO98wItVc7_RNB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyZRGiev6-WA7ixB3R4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugxsh2-y7Ou3t2LbPPh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgztAZsFO_CvA2E0tot4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxulG7ahnWk2cv1KMN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugzw-Sa4Xa3h40u-Mh14AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw3k59abIoy-SMevXJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
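A minimal sketch of how a batch response like the one above can be parsed and checked before the per-comment coding is recorded. The allowed values below are inferred from the entries shown on this page, not from a documented codebook, so treat the enumerations as assumptions:

```python
import json

# One entry from the raw batch response above, copied verbatim.
raw = '''[
  {"id": "ytc_UgxulG7ahnWk2cv1KMN4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''

# Allowed values per dimension, inferred from this page (assumption,
# not an official schema for the coding tool).
SCHEMA = {
    "responsibility": {"developer", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def validate(entries):
    """Keep only entries whose coded dimensions all fall in SCHEMA."""
    return [
        e for e in entries
        if all(e.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

entries = json.loads(raw)
valid = validate(entries)
print(len(valid))  # → 1
```

Validating against a fixed value set like this catches the common failure mode where the model emits an off-schema label (e.g. "anger" instead of "outrage") that would otherwise pollute downstream tallies.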