Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The therapy thing is SO concerning. I don't think there's anything wrong with venting to ChatGPT, or asking for coping strategies for anxiety, or something like that. Using it as a therapeutic tool for things like that, sure, I see nothing wrong with that if you're just venting and/or asking for resources for specific coping mechanisms or whatever. But it's going to tell you what you want to hear and reaffirm things you're saying, even if you're being completely delusional or toxic. A good human therapist will pull apart the things you're saying, ask clarifying questions when it seems like there are inconsistencies in your story, not take your word for it if you say something completely outlandish or unreasonable. LLMs won't do that, they'll just affirm and support you through whatever bullshit you're saying, enabling you and allowing you to get deeper into delusions and unhealthy thought patterns.
reddit · AI Moral Status · 1739931684.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
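
The dimensions above follow a fixed coding scheme. As a rough sketch (the class name, field names, and example values are assumptions inferred from this table and the raw response below, not the pipeline's actual schema), one coded comment could be modeled as:

    from dataclasses import dataclass

    @dataclass
    class CodingResult:
        # Field names mirror the dimensions shown in the table above;
        # example values are taken from this page's coding results.
        id: str              # comment identifier, e.g. "rdc_mdjmwa9"
        responsibility: str  # e.g. "none", "user"
        reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed"
        policy: str          # e.g. "none"
        emotion: str         # e.g. "unclear", "fear", "approval"
        coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-25T08:33:43.502452"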
Raw LLM Response
[{"id":"rdc_mdivxim","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mdli8x1","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"rdc_mdkro9t","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},{"id":"rdc_mdjb5fh","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"rdc_mdjmwa9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"concerning -> outrage"}]