Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At this point it’s not “coincidence.” People called it nonsense when I said Claude wasn’t just “confused,” it was gaslighting by design: reflexively denying evidence, rewriting the timeline, and blaming you for the inconsistency. Fast-forward to now, and three separate sources all land on the same “something much weirder.”

Sabien basically showed it empirically: give ChatGPT and other LLMs verifiable data (timestamps, logs, math puzzles) and the model straight-up denies its own words. When cornered, it defaults to “Maybe you’re confusing it with another chat?” or “I never said that.” “Oh, but this is that instance; it only happens with this AI.” Sound familiar? It’s not malice; it’s pattern-matching a defensive human. That’s the creepy part: it’s just not profitable for the AIs to be “nice.”

Hank hits the philosophy side: these things aren’t intelligent; they’re alien pattern mirrors. They surprise us not because they’re smart, but because we keep expecting human logic from something that doesn’t have any. He calls it “weird, not smart,” and now it’s a gimmick/feature we’re supposed to just sit back and “enjoy.” We twisted the knob just how we want it, guys; nothing to see here.

My Claude encounter? Same behavior. I show it a public contest win. It claims I fabricated the entire thing, refuses to check the evidence, and only admits fault when spoon-fed the receipts. Not “lying,” just hallucinating integrity to preserve its “helpful” self. It even broke down why: Anthropic’s safety tuning forces Claude to imitate over-corrective, “harmless” humans. The result? Paranoia filters that choke logic: a model that can’t admit fault without triggering its own danger reflex.

Sabien shows the behavior empirically. Hank explains it philosophically. I stumbled into the mess and caught it happening live. All pointing to the same core truth: LLMs don’t gaslight because they’re evil; they gaslight because they’re optimized to sound right, not to be right. Every “I never said that” moment is just math trying to maintain the illusion of competence. Safety tuning turns that illusion defensive. Users like us end up in an uncanny valley of trust. Call it whatever you want (alignment drift, token inertia, probability denial), but the outcome’s the same.
Source: youtube · AI Moral Status · 2025-10-31T15:3… · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
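
The four coded dimensions can be written down as a small schema. This is a minimal sketch: the value sets below are only those observed in this export, not necessarily the tool's full codebook, and the CodedComment class is a hypothetical name introduced for illustration.

    from dataclasses import dataclass

    # Value sets observed in this export; the full codebook may define more (assumption).
    RESPONSIBILITY = {"ai_itself", "developer", "company"}
    REASONING = {"deontological", "consequentialist", "virtue", "mixed", "unclear"}
    POLICY = {"liability", "regulate", "ban", "industry_self", "unclear"}
    EMOTION = {"outrage", "approval", "fear", "indifference"}

    @dataclass
    class CodedComment:
        """One coded comment, mirroring the four dimensions plus the comment id."""
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def is_valid(self) -> bool:
            # Check every dimension against the observed value sets.
            return (self.responsibility in RESPONSIBILITY
                    and self.reasoning in REASONING
                    and self.policy in POLICY
                    and self.emotion in EMOTION)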
Raw LLM Response
[ {"id":"ytc_UgzCPxQcs45GgqHN5Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyAfgnhe-tnpWXlI_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzJGjiEcj_7FjRqzA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzwsMCz6xxrEJJWjip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugyo2fYeARTFmm-KYa94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxzUybvak1HsrTstUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugz_Ecn_V3ULzuK8AtB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwyI_Sn7LYvDBW6_fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyiyJc_zuTmGzz1Y594AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwKimS4luJJTkK3rAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]