Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem isn't identification, it's *misidentification*. The problem is that the best recognition algorithms and technologies have such an unacceptably high failure rate that they cannot be applied as authorities would want them to be applied. For example, say you want to find a criminal among the U.S. adult population (over 250 million people). Say this dude is horrific which is why you're using "all cameras nationwide" in an Orwellian move to try to get him. He's killed like 20 people, gruesomely, etc. Major motivation. But there's exactly 1 correct identification to make there. In 250 MILLION. To get that right, the software would have to be over 99.9999999% accurate. At merely 99.99% accurate, you are literally arresting over a *quarter million people* who are not the guy you're looking for. Even if you only hit on that many and arrest 1% of those false hits, that's enough people to fill a stadium having their rights violated. For ONE GUY. It can never work the way they want it to. Statistics defeats it every time. It will always be a horror. It can never be anything BUT a horror, no matter how hard we try. And you can prove it mathematically.
Source: reddit · AI Harm Incident · 1563535404.0
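
The comment's argument rests on base-rate arithmetic. Below is a minimal sketch of that arithmetic under the comment's own figures, assuming a single pass over the population and reading "accuracy" as 1 minus the false-positive rate; the variable names and the set of accuracy values tried are illustrative, not part of the source.

    # Minimal sketch of the base-rate arithmetic in the comment above.
    # Population and the single true match come from the comment; each person
    # is assumed to be scanned exactly once.
    population = 250_000_000      # U.S. adult population, per the comment
    true_matches = 1              # exactly one correct identification to make

    for accuracy in (0.999, 0.9999, 0.999999999):
        false_positive_rate = 1.0 - accuracy
        expected_false_hits = (population - true_matches) * false_positive_rate
        print(f"accuracy {accuracy:.9%}: ~{expected_false_hits:,.0f} innocent matches")

Under these single-pass assumptions, 99.99% accuracy yields roughly 25,000 false matches; the comment's quarter-million figure follows if accuracy is 99.9% or if each person is captured by cameras repeatedly. Either way, the false matches swamp the one true match, which is the comment's point.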
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_eu7cbjn","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"rdc_eu7ee43","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"mixed"}, {"id":"rdc_eu7l2ym","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"rdc_jhgzbsx","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_jhfzge3","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]