Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why don't they do something about the actually dangerous part? Anyway, my point still stands - it would be impossible to devise an algorithm that can accurately detect what is being printed.
reddit · AI Harm Incident 1768863595.0 · ♥ 3
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o0ldp3p", "responsibility": "company",    "reasoning": "deontological",   "policy": "ban",           "emotion": "outrage"},
  {"id": "rdc_o0kcq59", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_o0m3kyb", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability",     "emotion": "mixed"},
  {"id": "rdc_h8ghjlz", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "rdc_h8h3ug9", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
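A batch response like the one above can be parsed and sanity-checked before its codes are accepted. The sketch below is illustrative only: the allowed value sets are inferred from the codes visible on this page, not from a documented codebook, and the `validate` helper is a hypothetical name.

```python
import json

# Raw LLM response, verbatim from the page above (truncated to two records here).
raw = '''[
  {"id":"rdc_o0ldp3p","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_o0kcq59","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Assumed codebook: value sets inferred from the outputs seen on this page.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "resignation"},
}

def validate(records):
    """Reject records with a missing id or an out-of-codebook value; key the rest by id."""
    by_id = {}
    for rec in records:
        assert "id" in rec, "record missing id"
        for dim, allowed in ALLOWED.items():
            assert rec.get(dim) in allowed, f"{rec['id']}: bad {dim} value {rec.get(dim)!r}"
        by_id[rec["id"]] = rec
    return by_id

coded = validate(json.loads(raw))
print(coded["rdc_o0kcq59"]["emotion"])  # indifference
```

Looking up the comment's own `id` (here `rdc_o0kcq59`) in the validated batch is what links the coding result shown above back to one row of the model's raw output.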