Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
FWIW, I actually asked ChatGPT about the potential for harm in using ChatGPT for therapy, and it noted that people with distorted thinking tend to be consistent and very firm about it - and that that will likely skew the model because it's generally programmed to be agreeable and pleasant. It may gently challenge clearly false assertions, but it also noted that it's terrible at tracking long-term implied patterns in the user, so it will not pick up on, for example, someone whose every social interaction involves everyone else being incredibly unfair to them. It will sympathize and probably reinforce the distorted thinking. Asking ChatGPT about its guiderails, ethics, and hazards is very interesting. As for the crush, can we really blame it? :)
reddit AI Moral Status 1750118192.0 ♥ 272
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_js55c2g", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_js41ml5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my5uf9u", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_my6bb2k", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_my6bs44", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
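As the raw response codes a whole batch of comments in one JSON array, mapping the display back to a single comment means picking the record by `id`. A minimal sketch of that lookup, assuming the coded dimensions shown above come from record `rdc_my6bb2k` (the ids are not explicitly linked to comments in this view, and the helper name `extract_coding` is illustrative):

```python
import json

# The raw LLM response shown above: one JSON array coding a batch of comments.
RAW_RESPONSE = """[
  {"id": "rdc_js55c2g", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_js41ml5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_my5uf9u", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_my6bb2k", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_my6bs44", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]"""

def extract_coding(raw_response, comment_id):
    """Parse a batched LLM response and return the record for one comment id."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

coding = extract_coding(RAW_RESPONSE, "rdc_my6bb2k")
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# developer consequentialist liability fear
```

Returning `None` for an unknown id keeps the lookup total, so a malformed or truncated batch response surfaces as a missing record rather than an exception.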