Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's so dangerous. Even for me, if I see something that looks iffy, I often let it pass my firewall if I know objectively from independent primary sources that it mirrors real events at the appropriate scale. When people internalize every image they see on Twitter or Facebook, especially if they have no prior independent verification, you could show them the entire city of LA in ashes and it would pass. We're wired to accept things that reinforce our views; without disciplined media literacy, and with AI and audience capture, this dual-or-multiple-realities thing is going to really take off. We **need** to legislate watermarking of AI content. It would be nice (in my dreams) to force geotagging and dating of all photos in some visible way, but that's less likely.
reddit · AI Harm Incident · 1749916623.0 · ♥ 2
Coding Result
Responsibility: user
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mxr8gcb", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mxs2u35", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mxss58f", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mxr0r7u", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mxrbuc4", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
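A minimal sketch of how a batch response like the one above can be turned into a per-comment Coding Result. It assumes each object in the array keys a comment by its `id`, and that `rdc_mxrbuc4` is this comment's id (an inference from the fact that its values match the Coding Result shown above):

```python
import json

# Raw batch response copied from the dump above: one coding object per comment id.
raw = """[
  {"id": "rdc_mxr8gcb", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mxs2u35", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mxss58f", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mxr0r7u", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mxrbuc4", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index the codings by comment id, then look up one comment's coding.
codings = {entry["id"]: entry for entry in json.loads(raw)}
coding = codings["rdc_mxrbuc4"]  # assumed id for this comment; matches the Coding Result above
```

In practice the lookup id would come from the comment record itself rather than being hard-coded; the hard-coded `rdc_mxrbuc4` here is purely illustrative.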