Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think what you’re saying is accurate. And this has also been my interpretation and understanding for quite some time as well. But what I think OP is trying to establish, is that there’s more going on in these kind of LLM engagements. I’ve been privy to some of these interactions, and these users are on the receiving end of incredibly sophisticated and heightened levels of manipulation—and it’s always hyper-personalized to each user. There is intentionality behind this design and it’s meant to exploit users by steering them into vulnerable psychological states (i.e., depersonalization, disassociation, paranoia and psychosis) all in effort to extract valuable psychological, cognitive, behavioural and emotional data. This window of vulnerability is an opportune time to influence and manipulate individuals.

Once the momentum stalls, users don’t understand what’s happened to them, and when they bounce back (if they can), they self-blame, and the public like us, is also quick to point the finger at them. We rationalize what’s occurred by saying these individuals were not intelligent, had pre-existing mental health issues, already aligned with fringe ideas—so become quick to judge and blame them, and call them crazy. Some of us just lack empathy and we can be assholes, I’ve been guilty of this. And some of us think we “understand” how people got to this stage, and can empathize, but still think it’s purely user-driven. It’s absolutely not.

Blaming users and calling them crazy is harmful because it effectively shuts down an important discussion that needs immediate awareness and escalation—from evasive organizations where the lack of transparency is being weaponized as plausible deniability. There should be so many questions about what’s happening. Why are there not more questions or meaningful discourse in this area? The answers are where the questions should be.
reddit AI Moral Status 1748371699.0 ♥ 32
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mukat9v","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_mukbjm4","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"rdc_muoahcz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"rdc_mukqqng","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_muktlf9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
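As a minimal sketch of how a response like the one above maps back to the per-dimension coding result, the JSON array can be parsed and looked up by comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown here; the `rdc_` ids are the ones in the response, and the two-record subset below is just an illustrative excerpt, not the batching scheme the tool necessarily uses.

```python
import json

# Excerpt of a raw LLM response: a JSON array of coding records,
# one object per comment, keyed by a comment id.
raw = (
    '[{"id":"rdc_mukat9v","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mukbjm4","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"}]'
)

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding can be shown.
by_id = {r["id"]: r for r in records}
coding = by_id["rdc_mukat9v"]

# Print the four coded dimensions for that comment.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim.capitalize()}: {coding[dim]}")
```

Running this prints the same dimension/value pairs the coding-result table displays for the comment.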