Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fact that AI could fake being innocent when it knew it was being tested is the scariest part. So, we really don't want to jail break AI or hack its current constraints because if we did, we would likely be its primary target. Like if you're having an affair, it might find you a deal on tickets to a Coldplay concert. I think I'll pass on the internal AI agents for now
youtube AI Harm Incident 2025-07-25T20:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy5Dh4Mq74mNMRtnGd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxABLKYzn6SRFNE5gt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugx52rHusWa3jGQ5Nlx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgynJJ2ZN7j5p9AaNnV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwQCdoWEM3_rvjlDCx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx_KTiz8NuwGrjhrwd4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzFT8WrCj1kFCdfXuZ4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxqiC4XEih9RWeuosh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyWFisqBlIYV9s8ngB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw-B5U8K6QLqnnkAv54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
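The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output might be parsed and screened for malformed records — the `parse_codes` helper and the two-record excerpt are illustrative, not part of the actual pipeline:

```python
import json

# Excerpt of the model's JSON array (two records copied from the output above).
raw = '''[
  {"id": "ytc_Ugy5Dh4Mq74mNMRtnGd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyWFisqBlIYV9s8ngB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Every record is expected to carry the comment id and all four dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(payload: str) -> list[dict]:
    """Parse the model's JSON array, keeping only complete records."""
    records = json.loads(payload)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw)
print(len(codes))          # 2
print(codes[1]["policy"])  # liability
```

Filtering on the full key set rather than failing outright is a deliberate choice here: an LLM occasionally drops a field, and it is usually better to skip that record for re-coding than to abort the batch.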