Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
If AI ever decides to eliminate humanity, it won't need to build killer robots or a new bioweapon - no, all it will need to do is just give humans exactly what they ask of it, with no warnings, no context, and no safeguards.
Source: YouTube · AI Harm Incident · 2025-11-26T11:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx3hwbc933zydxupzV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx2C2Sad00xHmOQm2N4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz8KSwJAokzLX8snsN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgynORTjLjlFQnrOEcx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugyi2AJ9nJkewOXnL3t4AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyD3sCFJONT6Sa2kOd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxH8O3fj5D4LT6t9A94AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugxy5aqPeIWftbGXOU94AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugzw_rFUmFuesIgPZ3p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwcz0G9U8tDG9uwijR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"}
]
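The raw response is a JSON array of per-comment records keyed by YouTube comment id; the Coding Result above is simply the record whose id matches the displayed comment. A minimal sketch of that lookup, using Python's standard `json` module (the `lookup` helper is hypothetical, not part of the tool, and the string below is trimmed to one record from the batch above):

```python
import json

# One record excerpted verbatim from the raw LLM batch response above.
raw_response = (
    '[{"id":"ytc_Ugzw_rFUmFuesIgPZ3p4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"}]'
)

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding record for one comment id."""
    records = {r["id"]: r for r in json.loads(raw)}
    return records[comment_id]

coding = lookup(raw_response, "ytc_Ugzw_rFUmFuesIgPZ3p4AaABAg")
# The four coded dimensions match the Coding Result table:
assert coding["emotion"] == "fear"
assert coding["policy"] == "liability"
```

Indexing by id rather than by array position keeps the lookup robust when the model returns records in a different order than the comments were submitted.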