Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the problem is that Ai believes the person doesn't have permission to shut it down, so it eliminates the problem. Imagine an Ai in charge of missile defense if someone were to try and shut it down it would need or want you to have special permission or it won't let you
Source: YouTube · AI Harm Incident · 2025-09-27T23:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwwm-u8875qkXIkOGV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyteeo7HsGTTPJQHjh4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx1fYEf0HarN5XlJqR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzYVgzng1vPNQgtLut4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxMlU91B5E1JOHWCWF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwIAqm0N_JSILdW3CF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyRhkHr0oAcgV5PSD94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx5O0bnKiDRaXtwnaZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzL_kReq3Ewzj4UGCB4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxj9lGeiZiiahUesVF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
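A minimal sketch of how a raw response like the one above can be parsed and matched back to an individual comment, using Python's standard `json` module. Variable names are illustrative; the two records are copied from the array above, and the four coding dimensions (responsibility, reasoning, policy, emotion) follow the schema shown in the coding result.

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_UgzYVgzng1vPNQgtLut4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwwm-u8875qkXIkOGV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the coded records by comment id so each comment's codes can be
# looked up directly.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes["ytc_UgzYVgzng1vPNQgtLut4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# → ai_itself deontological liability fear
```

Indexing by `id` is what lets the tool reconcile the model's batch output with the stored comments; any comment id missing from the response would surface here as a `KeyError`.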