Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not hard to counter this if you informed your AI about it being fake and unreliable. I know a lot of people gonna be thinking this is impossible and that you have to be a serious hacker or some shit. No. Just ask your AI to remember that they're fake. That's it. Most of them come with memory profile these days.
reddit · AI Harm Incident · 1743195488.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       mixed
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mk81dza", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_mk8i9h9", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mk91kr0", "responsibility": "user", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_mkbceh6", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_mkcbfcm", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
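A minimal sketch of how the raw response can be checked against the coded result shown above. It assumes the response is the JSON array printed here, and that the record for this comment is the one whose id (`rdc_mk91kr0`) carries the matching dimension values — that mapping is inferred from the data, not documented.

```python
import json

# Raw LLM response as shown above: a JSON array, one coded record per comment.
raw = """[
  {"id":"rdc_mk81dza","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_mk8i9h9","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mk91kr0","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_mkbceh6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_mkcbfcm","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}
]"""

# Index the batch by record id so a single comment's coding can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# Record id for this entry (assumed, matched by its dimension values).
coded = records["rdc_mk91kr0"]

# Verify the raw output agrees with the Coding Result table above.
expected = {"responsibility": "user", "reasoning": "mixed",
            "policy": "industry_self", "emotion": "approval"}
assert all(coded[k] == v for k, v in expected.items())
print(coded)
```

Indexing by id rather than array position guards against the model reordering records between batches.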