Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
9:22 does that imply, that when agents meet for the first time, they check if their goals align? What if they don't? Won't they try to end each other to ensure, that there won't be interference in trying to reach their goals? WWIII could be AI robots from meta fighting X robots and OpenAI. That would be a hell of a tie to be alive, tho.
youtube AI Harm Incident 2025-09-22T11:2…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzGNARMuqDogRF6fUh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyBelMTfB6p9ug2Bx14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzFb0SHkBh5HLNUrzF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugxumbh0zt-rHabnmzB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyUK8ayVsVwZSQPhXh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyvN8gS0KC81gULdSl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzcq5LzC9QmfYm0Usl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzQOS0qOQ_qZTc_art4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxxVwOc5K0Y6OrK1a94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy8-4MtYJgFuMY9h994AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
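A minimal sketch of how the batched response above can be matched back to a single coded comment: the model returns one JSON array per batch, so looking up a comment's coding is a parse followed by an id lookup. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the payload above; the truncated example below keeps only two entries for brevity.

```python
import json

# The raw LLM response is a JSON array with one coding object per comment.
# Two entries reproduced from the response above; the real batch has ten.
raw = '''[
  {"id": "ytc_UgyvN8gS0KC81gULdSl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy8-4MtYJgFuMY9h994AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so a single comment's coding can be retrieved.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgyvN8gS0KC81gULdSl4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# ai_itself unclear fear
```

This is why the Coding Result shown for the comment above (responsibility `ai_itself`, policy `unclear`, emotion `fear`) can be traced directly to one object in the raw array: the id is the join key between the coded dimensions and the model output.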