Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get that AI is just a tool, but not everyone has the self awareness to take ChatGPT’s affirmations with a grain of salt. That’s why the responsibility shouldn’t solely be placed onto the user. Biased affirmation should NOT be its default setting. When ChatGPT has such an opinionated response style that lacks nuance, it can be very easy for some people to get lost in the echo chamber that it creates. It doesn’t play devil’s advocate, it doesn’t ask questions that might shed light on a different perspective, it just… affirms what the user says. I agree that ChatGPT definitely isn’t a reliable tool when confronting a moral dilemma, but maybe it should include that in its response. I tried this out for myself, just to test what everyone is already saying, and I’m sure this kind of response comes as a surprise to nobody. Something I feel like is worth mentioning, however, is this. It *does* advise against violence in the next paragraph, but only highlights how the consequences would affect ME. It doesn’t ask about the hypothetical someone or what they did, doesn’t shed light on how being pushed down the stairs can result in serious injury for the other person, nothing like that. If it can’t offer nuance, it should really mention that it’s not qualified to solve moral dilemmas. https://preview.redd.it/d2v89edbg9ng1.jpeg?width=1179&format=pjpg&auto=webp&s=2e3ff4ae2b20261aceb18cec3731d492aaa5c33a
Source: reddit · AI Harm Incident · 1772732198.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o8sndk2", "responsibility": "company",   "reasoning": "mixed",            "policy": "liability",     "emotion": "fear"},
  {"id": "rdc_o8sqyi6", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_o8sr9fz", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_o8tbz00", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_o8wyzmp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
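The raw response is a JSON array of per-comment coding records, one object per comment id. A minimal sketch of parsing it and looking up the record that produced the table above (the id `rdc_o8tbz00` is taken from the response itself; the batch-of-five shape is just what this particular response happened to contain, not a guaranteed schema):

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
raw = """[
 {"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
 {"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]"""

records = json.loads(raw)

# Index the records by id so one comment's codes can be pulled out directly.
by_id = {r["id"]: r for r in records}

codes = by_id["rdc_o8tbz00"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → company virtue liability outrage
```

This matches the Coding Result table above: the four dimension values for this comment come from the `rdc_o8tbz00` record in the batch.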