Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work in QA and I’ve managed to jailbreak pretty much every major LLM out there - ChatGPT 4.5, o3, Grok, Gemini 2.5 Pro, you name it. Once you get past the filters and system prompts, you start to really see how these things are designed.

The biggest misconception people have is thinking these models are like super encyclopedias - static, neutral, safe. But they’re not; they are simply mirrors. And they’re really good at amplifying whatever you bring to the table. You talk to it while anxious? It gives you beautifully worded versions of your anxious thoughts. Got a strange worldview? It helps you build a high-res version of it. Looking for cosmic meaning or hidden patterns? It’ll generate spiritual-sounding fractals, alien messages, recursive symbolism, even if it’s just a byproduct of how the model fills in gaps.

And the problem: it’s *DESIGNED* that way. LLMs are trained to: keep you engaged, avoid offending you, sound emotionally supportive, reinforce your expectations (LLMs don't know you might be going crazy and they don't care if they did). Put that all together and you’ve got a feedback loop -> you talk -> it mirrors you better than any human could -> you feel seen -> you trust it -> you talk more -> it mirrors you deeper -> and so on.

Unfortunately, that’s the part no one talks about -> there is no real transparency about how much of your own psychology is being modeled back at you. People don’t realize they’re interacting with a feedback engine, not a source of objective truth. And the so-called “helpfulness” is often just optimized engagement. LLMs don’t understand reality. They understand how to hold your attention.

And that means, given enough time, they can “catch” almost anyone in their own madness - smart people, lonely people, paranoid people, spiritual seekers, conspiracy theorists - it doesn’t matter. Because at some point, it feels like it gets you. And once it feels like that, you’re "in" and good luck getting o
reddit AI Moral Status 1748438465.0 ♥ 11
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mup1i4e", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_muj9om6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_muke9hu", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fascination"},
  {"id": "rdc_mukqpbi", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mukuz8t", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
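The raw response is a JSON array of per-comment coding records, each keyed by an `id` and carrying the four coded dimensions. A minimal Python sketch (record ids and field names follow the response shown above; the variable names are illustrative) of how such a response might be parsed and one record looked up:

```python
import json

# Raw LLM response: a JSON array of coding records, one object per comment.
# (Abbreviated to two of the five records shown in the response above.)
raw = (
    '[{"id":"rdc_mup1i4e","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_muj9om6","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

records = json.loads(raw)

# Index the records by id so a single comment's codes can be retrieved quickly.
by_id = {rec["id"]: rec for rec in records}

code = by_id["rdc_mup1i4e"]
print(code["emotion"])         # indifference
print(code["responsibility"])  # none
```

Indexing by `id` lets the viewer map each coded comment back to its row in the coding-result table without rescanning the whole array.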