Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
58:50 I DON`T think that an AI starts at a point of "NOTHING is ok. Hmm, now you showed me, that that is ok, so THAT is ok." I am relatively sure that they start at "It´s all ok. Oh, now you say, killing humans is not ok. Hm, killing humans is not ok. Got it."
YouTube · AI Moral Status · 2026-03-06T07:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxAG9k2AaGAcHC2uFB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzpHB-ckI1gvH2NZ5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwyFkiCaKjEW0rmkvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw1wj84rffAEL4Qfux4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxHodCfgC2FkyVdsCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzBE-6zFVnamFsaKDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyqqEEPl85RC1HgpU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxS44bOMwnSq8l1Urh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz-HMaOpF5ru-rcVSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugw_Q-KTlmIZir2z15J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"} ]