Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like I have this issue even with traditional therapy. Even after recently seeing a therapist they were able to bring up notes from a conversation I had with a different one a few years ago, that I wasn't aware was being digitized (well, I'm sure I was "advised" under some very small fine print in stacks of paperwork somewhere). You could make the argument that they're bound by confidentiality to keep those notes "safe", yet I'm sure I wouldn't have to point out on this subreddit just how many cases there have been of companies (including companies in the medical field) that were compromised. I'd argue that if I were to say try using ChatGPT and related tech as a form of therapy (which I don't and really wouldn't want to in its current form) I'm personally at least aware of this idea and can try to reduce what information I'm providing it.
Source: reddit | Topic: AI Bias | Posted: 1682940393.0 (Unix epoch) | ♥ 12
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jif31jj", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jiewpp9", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jieapoo", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jif948h", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jiemau8", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
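The raw response is a JSON array with one record per comment in the batch; the coding-result table above corresponds to the record whose id matches this comment. A minimal sketch of how such a record could be pulled out of the raw output (the `extract_coding` helper name is hypothetical; only the field names come from the JSON above):

```python
import json

# Abbreviated copy of the raw LLM response shown above:
# a JSON array of per-comment coding records.
raw_response = """
[ {"id": "rdc_jif31jj", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jiewpp9", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"} ]
"""

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the record for one comment id.

    Raises KeyError if the model omitted the requested id, so a
    malformed batch fails loudly instead of coding silently.
    """
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id[comment_id]

coding = extract_coding(raw_response, "rdc_jif31jj")
print(coding["emotion"])  # -> resignation
```

Validating the parsed record against the expected dimension keys (responsibility, reasoning, policy, emotion) before storing it would catch responses where the model dropped a field.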