Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>How would you like to be at work and the system thinks you're a guy that shot up a church. Cops arrest you at work and you lose your job until you can prove otherwise. Or the cops just shoot you because you moved wrong. The cops will lean on the system to do the investigating. Instead of a solid lead, just wait till it finds a face. The same issue can happen today with humans. A human misidentifies you from security footage and photos and the cops are called and you get arrested. The problem isn't facial recognition; it's what you do with it. Free speech has its issues too. You have fake news, people spreading lies, slander, etc. The solution isn't to BAN free speech but rather regulate it in a way like we do today. That's why we have libel and slander laws for instance.
reddit · AI Harm Incident · 1557889073.0 · ♥ 49
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_enk0pxg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "rdc_enjv1pz", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_enjp76w", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_enjqw84", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_enj3y38", "responsibility": "government", "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"}
]
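As a minimal sketch of how the coding result above can be recovered from the raw response, assuming the model output is a valid JSON array with the field names shown (the parsing approach is illustrative, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "rdc_enk0pxg", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "rdc_enjp76w", "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "approval"}
]'''

# Index the coded records by comment id for lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# The record for this comment matches the Dimension/Value table above.
rec = codes["rdc_enk0pxg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# ai_itself consequentialist none fear
```

A lookup by id like this is how each comment's row in the coding table can be cross-checked against the raw batch response.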