Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it as simple as saying, "it is evil, because when we tried to shut it off, it wanted to kill someone" For the AI that means being killed. If someone would tell you that they will kill you, you'd also take drastic measurements to stay alive, no?
Source: youtube | AI Moral Status | 2025-12-11T14:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        contractualist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwLqwWPNSi80Pck1FR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxS9cbnWBU4RLg0i8V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYHKWsV5nENtTaACF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzS_ZKRLbN4jcoWm9F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgySIVJR4Mcx_gdOgI14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwBkh21u3ULFxSX0Qd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwC8w229fNehQfJ9WF4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzNImYB3wz7j4_JAjd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxxmXYsfYu-O2vxxcd4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8p0OUJ0NzNzmYSXd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
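A response in this shape is plain JSON, so it can be checked and tallied with a short script. The sketch below is illustrative, not part of the original pipeline: it parses two entries copied from the raw response above, verifies that each record carries the four coding dimensions, and counts codes per dimension.

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above;
# a full batch of ten (or more) parses the same way.
raw = '''[
  {"id": "ytc_UgwLqwWPNSi80Pck1FR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxS9cbnWBU4RLg0i8V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Every record must have an id plus the four coding dimensions.
for record in codes:
    assert set(record) == {"id", *DIMENSIONS}, record["id"]

# Tally the codes assigned along each dimension.
tallies = {dim: Counter(r[dim] for r in codes) for dim in DIMENSIONS}
print(tallies["responsibility"])  # Counter({'ai_itself': 2})
print(tallies["emotion"])         # Counter({'fear': 1, 'outrage': 1})
```

The `id` field ties each coded record back to the YouTube comment it was derived from, so tallies can be joined against the original comments if needed.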