Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Killing one human or five robots: Interesting that the AI got it wrong, killing the human instead of saving the human, by misreading the task. He said "I would pull the lever to save the human", but according to the task, pulling the lever would kill the human. So the AI misunderstood the task and killed the human by mistake.
YouTube · 2026-01-09T04:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzXAqei_QI0yKwyTOt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwaGxJ7WcU0prMWBWR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwSBl5csCknuSN-nLJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyk2YTsMBbNn1_GHdB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwIKHQHX3dEGQhfcxd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyUk6M7HuYJUB2Cqox4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxy0ubc61jwgzj0CUJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxMcblAXC62TRwmFTt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzfvWr21T9n2cNN9FR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx0hgyNeehiBVKhb0F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
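The raw response above is a JSON array of coding records, one per comment, which can be matched back to a comment's coding result by its `id`. A minimal Python sketch of that lookup (using one record copied verbatim from the response above; the variable names are illustrative, not part of any real pipeline):

```python
import json

# One record copied from the raw LLM response above; in practice the full
# array returned by the model would be parsed the same way.
raw_response = '''[
  {"id": "ytc_UgzfvWr21T9n2cNN9FR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

# Parse the array and index the records by comment id for direct lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Retrieve the coding dimensions for a specific comment.
rec = by_id["ytc_UgzfvWr21T9n2cNN9FR4AaABAg"]
print(rec["reasoning"])  # prints: deontological
```

The same id-keyed lookup is what lets the per-comment "Coding Result" table be populated from a batched model response.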