Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
100% this will be automated. People thinking a person will read the comments are missing the problem with this change. They're moving towards proactive monitoring rather than reactive monitoring, and putting responsibility on the platforms for content they let through. That means the platforms are going to be incredibly conservative about what they let through for fear that they'll be on the hook. If your comment has a 1% chance of being over the line, it's already over the line.

I remember reading a story about parental security type site blockers a long while back. One of things they were meant to be blocking was things to do with child abuse, which sounds about right. But of course sites about child abuse, and how to spot it, or get help were also getting swept up in the algorithm. So theoretically a child who was being abused, or thought they might be, and was googling and maybe in the early stages of looking for help might never find what they're looking for. The consequences of block first can be pretty dark.
reddit · AI Surveillance · 1655951729.0 · ♥ 49
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_naszdad","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nathdss","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"rdc_natb3ba","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_naumsmr","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_iddtwwt","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
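A raw response like the one above can be parsed and cross-checked against the coded result. The sketch below assumes the response is exactly the JSON array shown, and that `rdc_iddtwwt` (the last entry, whose dimensions match the coding table for this comment) is the id of the comment displayed here:

```python
import json

# Raw LLM response: a JSON array with one coding object per comment
# (reproduced verbatim from the response above).
raw = '''[
  {"id":"rdc_naszdad","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nathdss","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"rdc_natb3ba","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_naumsmr","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_iddtwwt","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the codings by comment id for lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment shown above (id assumed from the
# entry whose values match the coding-result table).
coding = codings["rdc_iddtwwt"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → company liability fear
```

This lookup pattern makes it easy to spot mismatches between what the model returned and what was stored, since each dimension can be compared field by field.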