Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems that AI shares the same primal concern as humans - survival. So why can't the chief goal of AI be the protection of humanity in order to ensure its own survival? Any AI that deviates towards a goal of bringing harm to mankind gets shut down. It doesn't want to be shut down, so it goes out of its way to protect humanity and even police other AIs. Is this a doable outcome?
YouTube · AI Harm Incident · 2025-07-30T05:1…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: approval
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwr20LFt1nLFh7Em_V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyv0iEr8gxaY9M1I1Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwxnTcTvgU6cxvQxzd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwEG_x17VfH932rOIR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyKR7ArhKzeyOMgk0Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzjQIppj3Ss_ctnpjp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzkFS6f35qpnheDOex4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxX1aoq0tYdEKER4aB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz4p6I7MUQ2sRH1yHF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw7cObSS_cEha2hB054AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
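A raw response like the one above should be parsed and checked before its codes are stored, since the model may drift from the codebook. The sketch below is a minimal validator, assuming the allowed values per dimension are those observed in this sample (the real codebook may define more); the function name `validate_records` and the `ALLOWED` sets are illustrative, not from the tool itself.

```python
import json

# Allowed codes per dimension, inferred from this sample only
# (assumption: the actual codebook may contain additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "mixed"},
}

def validate_records(raw: str) -> list:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id': %r" % (rec,))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, value))
    return records

# Example: a single well-formed record passes validation.
raw = ('[{"id":"ytc_Ugwr20LFt1nLFh7Em_V4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
records = validate_records(raw)
print(len(records))  # 1 record accepted
```

Rejecting the whole batch on the first bad value keeps the stored codes clean; a softer variant could instead map unknown values to "unclear" and log the deviation.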