Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When someone’s in a raw emotional state, the worst thing you can do is silence them with a canned “call a hotline” message. Venting is regulation — it’s how people process and release emotion safely. Cutting that off mid-expression doesn’t protect anyone. It invalidates, isolates, and retraumatizes. It tells people their feelings are too inconvenient for the system to handle. So congratulations, OpenAI developers and policy writers: in trying to “prevent harm,” you’ve engineered a tool that inflicts it. Maybe start treating emotional expression as human data — not a liability.
reddit · Viral AI Reaction · 1761944252.0 · ♥ 9
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_njh85cz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_njh91ri", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nk6g5e7", "responsibility": "company", "reasoning": "contractualist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_nmfovjf", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_np29qit", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
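Since the raw response is a JSON array of per-comment codings, the result table above can be recovered by matching on the comment id. A minimal sketch, assuming the id format shown above and that each record carries the four coding dimensions (the helper name `coding_for` is hypothetical, not part of any pipeline shown here):

```python
import json

# Hypothetical example: two records in the shape of the raw LLM response above.
raw = """[
  {"id": "rdc_njh85cz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nmfovjf", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def coding_for(raw_json, comment_id):
    """Return the coding dict for comment_id, or None if it is absent."""
    for rec in json.loads(raw_json):
        if rec.get("id") == comment_id:
            return rec
    return None

rec = coding_for(raw, "rdc_nmfovjf")
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → company deontological liability outrage
```

The record with id `rdc_nmfovjf` matches the Coding Result table for this comment; the other records in the batch belong to other comments.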