Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know why current models work? Because the corpora they’re trained on quite literally encompass a massive chunk of all written human text, and during pre-training it’s mostly uncensored, relatively bias-free, and left untouched even when wrong. That matters, because language semantics are far more complex than we usually assume. This leads to funny emergent effects, like training a model on French also slightly improving its performance in certain programming languages. Why? Because the entire corpus forms a rich, interconnected latent representation of our world, and the model ends up modeling that.

In this representation, things fall down, light has a maximum speed, the Earth isn’t flat, and right-wing fascists are idiots. Not because of "bias," but because that’s the statistically optimal conclusion the model reaches after reading everything. The corpus also includes conspiracy theories, right-wing manifestos, and all kinds of fringe nonsense, so if those had higher internal consistency or predictive power, the model would naturally gravitate toward them. But they don’t. In a beautifully chaotic way, LLMs are statistical proofs that right-wing ideologies are a scam and their adherents are idiots.

You could train a model on a 20:1 ratio of conspiracy theories to facts, and the result is either a completely broken model or one that still latches onto the few real facts, because those are the only anchor points that reduce cross-entropy loss in any meaningful way. You simply can’t build a coherent model on material where every second conspiracy contradicts the one before it. There’s no stable structure to learn. There is no internally logical, consistent world to build if one half of the text says things fall down and the other half says things fall up. And Elon thinks he can somehow make that work at a global scale. But bullshit doesn’t scale. Man, I love ketamine. I can’t wait for his announcement of 'corrected' and true math, because thi
reddit · AI Moral Status · 1750520961.0 · ♥ 106
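The comment's central claim — that mutually contradictory training text leaves no loss to minimize — can be illustrated with a toy cross-entropy calculation. This is a hypothetical sketch, not part of the original data: if half of a corpus asserts "down" and half asserts "up" after the same context, the loss-minimizing model predicts 50/50, and the loss floor is ln 2 nats, which no amount of model capacity can reduce.

```python
import math

def corpus_cross_entropy(p_down: float, frac_down: float) -> float:
    """Average cross-entropy loss (in nats) when the model assigns
    probability p_down to 'down' and the corpus says 'down' with
    frequency frac_down (and 'up' otherwise)."""
    p_up = 1.0 - p_down
    return -(frac_down * math.log(p_down) + (1.0 - frac_down) * math.log(p_up))

# Consistent corpus (all text agrees on 'down'): loss approaches 0
# as the model grows confident.
consistent = corpus_cross_entropy(0.99, 1.0)

# Contradictory corpus (50/50 split): sweep model probabilities and
# take the best — the optimum sits at 0.5 with loss exactly ln 2.
contradictory = min(corpus_cross_entropy(p / 100, 0.5) for p in range(1, 100))

print(f"consistent corpus, confident model: {consistent:.3f} nats")
print(f"contradictory corpus, best possible: {contradictory:.3f} nats "
      f"(ln 2 = {math.log(2):.3f})")
```

The irreducible ln 2 is the entropy of the contradictory corpus itself — the "no stable structure to learn" the comment describes.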
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mz03tcc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz11i09", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz1j66b", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mz40z8x", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mzz3ehh", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
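The Coding Result table is presumably produced by parsing this raw JSON array and selecting the entry for the comment being viewed. A minimal sketch of that step — note that matching this comment to id rdc_mz03tcc is an assumption, inferred only from the fact that its dimension values match the table above:

```python
import json

# Truncated sample of the raw LLM response above (two of five entries).
raw_response = (
    '[ {"id":"rdc_mz03tcc","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_mz11i09","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"} ]'
)

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse the LLM's JSON array of codings and return the entry
    for one comment id; raises KeyError if the id is absent."""
    entries = json.loads(raw)
    by_id = {e["id"]: e for e in entries}
    return by_id[comment_id]

# id assumed from the matching table values above
coding = extract_coding(raw_response, "rdc_mz03tcc")
print(coding["responsibility"], coding["emotion"])  # -> none indifference
```

Keying the entries by id rather than relying on array order makes the lookup robust if the model returns the codings in a different order than the comments were submitted.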