Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe he's talking about solving the moral dilemma. Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do? Obviously one person dying is better than 10. But people won't buy a car programmed to kill them. They'll continue to drive manually, and their human error will kill more people. Basically there will be scenarios in which a computer will decide who lives and who dies, and it makes people uncomfortable.
reddit · Cross-Cultural · 1522951859.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dwv6rxm", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dwv67a2", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_dwutpph", "responsibility": "government", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_dwux866", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_dwuzidq", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
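The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of turning it into a lookup table (the field names come from the response itself; the variable names here are illustrative, not part of any tool API):

```python
import json

# The raw LLM response, verbatim from above.
raw = '''[
  {"id": "rdc_dwv6rxm", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dwv67a2", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_dwutpph", "responsibility": "government", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_dwux866", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_dwuzidq", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]'''

# Index the codes by comment id so one comment's coding can be looked up directly.
codes = {item["id"]: item for item in json.loads(raw)}

print(codes["rdc_dwv6rxm"]["reasoning"])  # consequentialist
```

Note that the batch response codes five comments at once; the coding-result table above shows only the entry for the comment being inspected.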