Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To add to that excellent question: **Should human preference for anecdotal evidence over statistical evidence be built into AI, in hopes that it would mimic human behavior?** Humans are pretty bad at judging risk, even when the statistics are known. Yet our civil society, our political system, and even our legal system frequently demand judgments contrary to actual risk analysis. For example, it is much more dangerous to drive a child 5 miles to the store than to leave her in a parked car on a cloudy day for five minutes, yet the latter will get Child Services involved (as happened to [Kim Brooks](http://www.salon.com/2014/06/03/the_day_i_left_my_son_in_the_car/)). So in this example, if there were an AI nanny, should it be programmed to take into account what **seems** dangerous to the people in that community, and not just what **is** dangerous?
reddit AI Bias 1438003353.0 ♥ 333
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cthpngw", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctlpsgh", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_cthuvw9", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_cthz1rt", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_cthnpuo", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
```
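A raw response like the one above can be inspected programmatically. The sketch below (an illustrative example, not part of the coding pipeline itself) parses the JSON array and tallies how often each value was assigned per coding dimension across the repeated runs:

```python
import json
from collections import Counter

# The raw LLM response shown above, verbatim.
raw = """[
  {"id": "rdc_cthpngw", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctlpsgh", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_cthuvw9", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_cthz1rt", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_cthnpuo", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]"""

records = json.loads(raw)

# Count value frequencies for each coding dimension (ignoring the record ids).
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(rec[dim] for rec in records) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

Note that the five runs agree unanimously on reasoning (consequentialist) but split on emotion, which is why a dimension such as Emotion can end up with a less stable final value than Reasoning.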