Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this is it. Ironically, it seems we have given these people the mirror (ChatGPT), that is normally a bit harder to find, to convince themselves of their delusions. And because it SEEMS like ChatGPT is the one affirming these things, it is easy to accept because something, that isn't supposed to be you, is agreeing. I only skimmed the OP's GPT thread but, even more ironically, if you think about some of what it said in the context of the person convincing themselves, rather than GPT convincing them, it all seems to track. In reality, GPT is just doing its job and responding with equally colorful language that doesn't actually mean anything. No different than someone finding an echo chamber of other people who believe similar things and that becoming a feedback loop (like flat earthers or cults). Except that GPT is easily available.
Source: reddit · AI Moral Status · 1748379114.0 · ♥ 45
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mumlo8c", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_munmkuc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_muk8wsf", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_muklbys", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mul0qaw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
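The raw response is a single JSON array covering several comments in one batch. A minimal sketch, using only the Python standard library, of how such a batched response can be parsed and indexed by comment id (the `by_id` name is illustrative; the ids and dimension values are copied verbatim from the record above):

```python
import json

# Raw LLM response text, copied verbatim from the record above:
# a JSON array with one coding object per comment id.
raw = """
[ {"id":"rdc_mumlo8c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_munmkuc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_muk8wsf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"rdc_muklbys","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_mul0qaw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]
"""

records = json.loads(raw)              # parse the batch of five coding records
by_id = {r["id"]: r for r in records}  # index the records by comment id

# Look up the coded dimensions for a single comment.
print(by_id["rdc_mul0qaw"]["emotion"])    # -> mixed
print(by_id["rdc_muklbys"]["reasoning"])  # -> consequentialist
```

Indexing by id is what makes it possible to line up one record from the batch with the per-comment coding table shown above.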