Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used to work in research ethics and raised the alarm about the possible negative effects. I didn’t expect exactly this, but I knew better than to allow researchers to expose people to ChatGPT for mental healthcare without a professional present and other safeguards. Unfortunately, I wouldn’t be surprised if a lot of this is probably unethical research in itself.  What is most disturbing to me is the narcissistic word salad patterns it uses. It’s the same that cult leaders and people falsely trying to convince others they know what they’re doing use. Critical thinking, literacy, and media literacy is extremely low and falling further fast, so people are more susceptible to believing something committed to telling them what they want to hear, even if it’s false, than someone admitting to not knowing something or telling them something they don’t want to hear even if it’s the truth.
reddit · AI Moral Status · 1748374807.0 · ♥ 16
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mulpbel", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_mukdmud", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "rdc_mukllx9", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mukt1un", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_mup2ayc", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
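The batch response is a JSON array keyed by comment id, so each record can be matched back to its coding table. A minimal sketch of that lookup, using the payload verbatim from above (the assumption that `rdc_mukllx9` is the id of this comment is inferred from its values matching the coding table):

```python
import json

# Raw LLM batch response, copied verbatim from the log above.
raw = """[
  {"id": "rdc_mulpbel", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mukdmud", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mukllx9", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mukt1un", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mup2ayc", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

records = json.loads(raw)

# Index the batch by comment id so each coded comment can retrieve its row.
by_id = {record["id"]: record for record in records}

# The entry whose dimensions match the coding table above.
coded = by_id["rdc_mukllx9"]
print(coded["responsibility"], coded["emotion"])  # developer outrage
```

Indexing by id rather than by list position keeps the lookup robust if the model returns the records out of order.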