Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not against self-driving cars, per se, I now have my first car with smart cruise control, and I love it, my question is, how is hazard avoidance going to be programmed? Who is the car going to choose to kill in case of an incipient accident where collision is not avoidable?
reddit · AI Harm Incident · 1504799620.0 · ♥ 7
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dmoz7gn", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_dmosvl2", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_dmovjr6", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_dmovguv", "responsibility": "manufacturer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "rdc_dmozeq3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
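As a sketch of how such a batch response might be consumed, the snippet below parses the JSON above, indexes the codings by comment id, and checks each dimension against an allowed-value set. The ids and field names are taken from the response itself; the `ALLOWED` sets are inferred from this single example, not from any documented codebook, so treat them as assumptions.

```python
import json

# Batch coding response, copied from the raw LLM output above.
raw = """[
  {"id": "rdc_dmoz7gn", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_dmosvl2", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_dmovjr6", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_dmovguv", "responsibility": "manufacturer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "rdc_dmozeq3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Hypothetical value sets, inferred from this one response only.
ALLOWED = {
    "responsibility": {"company", "developer", "manufacturer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"ban", "liability", "industry_self", "regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval"},
}

codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}  # index codings by comment id

# Reject any coding whose value falls outside the allowed set.
for c in codings:
    for dim, allowed in ALLOWED.items():
        assert c[dim] in allowed, f"{c['id']}: unexpected {dim} value {c[dim]!r}"

# The coding shown in the table above belongs to id rdc_dmosvl2.
print(by_id["rdc_dmosvl2"]["responsibility"])  # developer
```

Indexing by id is what lets the page map one entry of the batch response back to the single comment being inspected.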