Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hadn’t even thought about this aspect of terrible AI responses. Even if we get it to reliably not make up information, if the information it is providing to the user is wrong at the source, it’s just as bad. And since it’s coming through the LLM, you’re losing the context of “does this seem reliable?”
reddit AI Harm Incident 1743184698.0 ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mk81dza", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_mk8i9h9", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mk91kr0", "responsibility": "user", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_mkbceh6", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_mkcbfcm", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
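Since the raw response is a JSON array with one object per comment id, looking up the coding for a single comment is a small parsing exercise. Below is a minimal sketch of that lookup; `code_for` is a hypothetical helper, not part of the tool, and `raw_response` stands in for the model output shown above (truncated to two entries for brevity).

```python
import json

# Hypothetical raw LLM response: a JSON array with one coding object per
# comment id, matching the shape shown above (truncated to two entries).
raw_response = '''[
  {"id": "rdc_mk81dza", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_mk8i9h9", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def code_for(raw: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id, or raise KeyError."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

# Inspect the coded dimensions for the comment shown above.
print(code_for(raw_response, "rdc_mk81dza")["emotion"])  # fear
```

A lookup like this makes it easy to cross-check the table for any coded comment against the exact model output, rather than trusting the parsed dimensions alone.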