Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's interesting that in the beginning, ChatGPT was advising you to pull the lever to minimize total harm when the ultimate responsibility of the action would be on you, but when faced with the same problem itself, it chooses inaction to avoid being directly accountable for the death of that one person on the other track.
youtube · 2025-10-04T18:5… · ♥ 488
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx7BZ_2JuZ9ZNVaZHR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwP5p-nk9Ks1BSO-F14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgySUdh0Wv7svW7OwPx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx4L8mIwmATzAo4emx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyvhs3pRhCYYh_a4nx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzC1vkMqNkt6GByxmp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx5tLdmpiDtf0_Zj0h4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz1qtX2sBiM_ciC5ed4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxfsdORakTT2hGfKFt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwjqpxO6F88DWaOpTp4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
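The raw response above is a JSON array with one coding record per comment. A minimal sketch of how such a response could be parsed and indexed by comment id (the variable names are illustrative, not part of any actual pipeline; the two records below are copied from the array above):

```python
import json

# Truncated sample of the raw LLM response shown above (two of the ten records).
raw_response = '''[
  {"id": "ytc_Ugx7BZ_2JuZ9ZNVaZHR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzC1vkMqNkt6GByxmp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]'''

# Parse the model output and index the coding records by comment id.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for a single comment.
coding = codings["ytc_UgzC1vkMqNkt6GByxmp4AaABAg"]
print(coding["emotion"])  # mixed
```

Indexing by id makes it straightforward to join a record in the batch response back to the comment it codes, as the "Coding Result" view above does.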