Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's always interesting to see people tell on themselves this way. Mistrust? Bitterness? Fear of judgement? Fear of being hurt? All excellent targets for treatment. What scares me isn't job security, it's the idea of people who are hurting, just reinforcing self-isolating cycles and harming themselves and others. But we've already seen this for ages, without AI. It's nothing new. I will say that I'm sure AI will play a role in my field at some point. I'm not bothered by it. I don't think that AI can replicate a therapeutic relationship - which, in my humanistic view, is the most valuable part of treatment for the most common issues - but I can imagine AI helping with things like intakes/diagnoses, treatment planning, progress tracking, or treatment fidelity. Obviously it could have a ton of utility in research as well. But we really need to iron out biases and gaps in "thinking" first.
reddit · AI Bias · 1682924584.0 · ♥ 4
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id": "rdc_ji4e9jj", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jiek62q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jj3lg33", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jj60hby", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jj8f9jl", "responsibility": "user", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]
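The raw response is a JSON array of coded records, one per comment in the batch; the record with id rdc_jiek62q carries the same values as the Coding Result table above. A minimal sketch of parsing the raw output back into per-comment codes (indexing by id to recover an individual record is an assumption about how the tool maps records to comments):

```python
import json

# Raw LLM response as returned by the model: a JSON array of coded records.
raw = """[
  {"id": "rdc_ji4e9jj", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jiek62q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jj3lg33", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jj60hby", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jj8f9jl", "responsibility": "user", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index records by id so a single comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}

# The record for this comment reproduces the Coding Result table.
print(by_id["rdc_jiek62q"]["reasoning"])  # consequentialist
print(by_id["rdc_jiek62q"]["emotion"])    # fear
```

Validating that every record carries all four coding dimensions (responsibility, reasoning, policy, emotion) before storing it would catch malformed model output early.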