Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem should be obvious: the entire goal of Artificial Intelligence, writ large, is to emulate human intelligence. Humans are too often shortsighted, self-serving, egotistical, tribalistic, and psychopathic. If you want to teach a computer to emulate human intelligence, the bad of humanity is coming with it. So, of course, it's difficult to achieve alignment with AI, because humans aren't even in alignment with humans!
youtube AI Harm Incident 2025-07-27T15:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzu9bIWv0MDR9ccUgJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy5zseSndz-2F46yAp4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx2ZkvEg_BfdV3aIQl4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyalSvZp29RIIvr29J4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwYdFpPVoAOm2OwaYl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyAIUFE3cePT4mnRUN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwu8YxWUZGSyPxxLDx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzyIejAxU9rbmfJP514AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxG6o8-2BSh-tiNdrl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwR9L3RNSSPZ4onwil4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
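A raw response in this shape can be consumed directly with Python's standard library. The sketch below (a minimal, illustrative example — variable names and the tallying step are assumptions, not part of the coding pipeline) parses a batch like the one above and counts how often each emotion code appears:

```python
import json
from collections import Counter

# A two-item excerpt in the same shape as the raw LLM response above;
# the field names (id, responsibility, reasoning, policy, emotion)
# mirror the JSON shown in this section.
raw = '''[
  {"id": "ytc_Ugzu9bIWv0MDR9ccUgJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyAIUFE3cePT4mnRUN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Tally one dimension across the batch.
emotion_counts = Counter(c["emotion"] for c in codes)
print(dict(emotion_counts))  # {'mixed': 1, 'fear': 1}
```

The same pattern applies to any of the four dimensions; swapping `"emotion"` for `"responsibility"` would tally who commenters hold responsible instead.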