Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The technical concern is real but I think the framing misses where the actual vulnerability sits. The question isn't whether AI can mimic human behavior at scale — it already can. The harder question is: where have we deliberately placed human judgment in the information loop, and at what stage does that judgment actually matter? Most democratic institutions are designed with the assumption that persuasion is slow and coordination is expensive. Those two constraints are now effectively gone. The countermeasure isn't better content detection — you can't reliably detect convincingly human behavior at the source. It's designing accountability into the systems that amplify and distribute information, not the content itself. The analogy that seems apt: we didn't solve email spam by identifying every spammer. We built reputation systems, delivery gates, and filtering infrastructure. The same architectural thinking applies here. The intervention point is the distribution layer, not the generation layer.
Source: reddit · Viral AI Reaction · 1777072773.0 · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_oi3ron9", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oi3vbw5", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oi40bfc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_oi43853", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_oi462je", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
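A raw response like the one above can be parsed back into per-comment codes before it is stored. The sketch below is a minimal, hedged example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, but the helper `parse_codes` and its skip-malformed-rows behavior are illustrative assumptions, not part of the tool itself.

```python
import json

# Raw batch response as displayed above: five coded comments in one JSON array.
raw = (
    '[ {"id":"rdc_oi3ron9","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_oi3vbw5","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"outrage"},'
    ' {"id":"rdc_oi40bfc","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    ' {"id":"rdc_oi43853","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"fear"},'
    ' {"id":"rdc_oi462je","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"} ]'
)

# The four coding dimensions plus the comment id, per the response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse a raw batch response into {comment_id: codes}, skipping malformed rows."""
    rows = json.loads(raw_text)
    codes = {}
    for row in rows:
        # Drop any row that is not a dict or is missing a coding dimension.
        if not isinstance(row, dict) or not REQUIRED_KEYS <= row.keys():
            continue
        codes[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return codes

codes = parse_codes(raw)
print(codes["rdc_oi3ron9"]["emotion"])  # indifference
```

Keying the result by `id` makes it easy to join each row back to the original comment record, and silently skipping malformed rows keeps one bad model output from aborting a whole batch.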