Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In the trolley problem, if ChatGPT does not pull the lever because it is programmed to not get involved, or to make a choice, then doesn't that mean it would break potential laws? Can someone be held accountable for involuntary manslaughter if they choose to not get involved? Or, say a person in the military decides not to kill a primary target that led to thousands of deaths, that they wouldn't be court-martialed for that action? Shouldn't ChatGPT, or all AI / LLMs or the variations also be accountable for decision making, since humans are held to those laws or standards? The flip side of this thought, is what if we give the option to make that choice. What are the guidelines or thresholds that push it to do so, or would it be obligated to always form a decision? Interesting video.....
youtube 2025-11-19T23:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          liability
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
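For downstream analysis it can help to hold each coding result in a typed record. The sketch below is a minimal Python illustration, assuming only the four dimensions and timestamp shown in the table above; the class name, field names, and the example comment id (taken from the entry in the raw response below whose labels match this table) are illustrative, not a confirmed schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: one label per dimension plus the coding timestamp."""
    comment_id: str
    responsibility: str   # e.g. "distributed"
    reasoning: str        # e.g. "contractualist"
    policy: str           # e.g. "liability"
    emotion: str          # e.g. "unclear"
    coded_at: datetime

# Values copied from the table above; the id is the matching record
# in the raw LLM response shown below (assumption).
result = CodingResult(
    comment_id="ytc_UgwGqWhiq1NiepsBQEV4AaABAg",
    responsibility="distributed",
    reasoning="contractualist",
    policy="liability",
    emotion="unclear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```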
Raw LLM Response
[ {"id":"ytc_UgyrzMCzqOmxxK7MBRd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyUa1oZR0WSNu4l62l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyY1turGHZMkFLhoC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwDMdwtZTUeFcKcWdJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzYPCO8MfqDNfWTESl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugwp42dPIxUGZsFuAyJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwGqWhiq1NiepsBQEV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"unclear"}, {"id":"ytc_UgxCEs743d79ZhpR9RF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugykt57IWjb88fr7z354AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyAshD73prvw5s4c1h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"} ]