Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm pretty sure the robot simply misunderstood the sentence. "Do you want do destroy humans?" can also be interpreted as "Do you want to go and destroy some humans with me?"
YouTube · AI Moral Status · 2018-05-18T11:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwu4nhWFX0YGZf7rl14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyL8J1lnwHcVBRVpnZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxJBS0W8ZvtpTqS16Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw-1QDAKos2jD0bvGt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpddDS91I3I-FCddp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVPhpEoKpghukmPfl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyi6_a9g7Z4fomi2hN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz0tqynjz0EZ92FDm14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwU6irMHbUG-zYiJzN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyIJVyWipOkF3lSasl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
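Checking a model output like the one above by eye is error-prone; a minimal Python sketch of how the raw response could be parsed and indexed by comment id is shown below. The `index_codings` helper and `EXPECTED_KEYS` set are illustrative assumptions, not part of the tool; the two records are copied from the response above, truncated for brevity.

```python
import json

# Truncated excerpt of a raw LLM response: a JSON array with one
# object per coded comment (two of the ten records shown).
raw = """
[
  {"id": "ytc_Ugwu4nhWFX0YGZf7rl14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyL8J1lnwHcVBRVpnZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Dimensions every record is expected to carry (hypothetical schema
# inferred from the coding result shown above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the raw response and index the coded dimensions by
    comment id, rejecting records with missing keys."""
    coded = {}
    for rec in json.loads(raw_json):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return coded

coded = index_codings(raw)
print(coded["ytc_UgyL8J1lnwHcVBRVpnZ4AaABAg"]["emotion"])  # indifference
```

Looking up a comment id in the returned dict reproduces the "Coding Result" row shown above for that comment.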