Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And this is the kind of absurdist nonsense that goes viral. **Some Secrets Humanity Might Not Know Yet?** What a load of pseudo scientific bullshit. A GPT would not know a secret we haven't figured out yet because it's a secret we haven't figured out yet. And if you know about it, then you're basing it on training data. Therefore, it's not a secret. 1. If it’s in the training data, then it’s not a secret. 2. If it’s *not* in the training data, then the model can’t access it. 3. So, what do you get instead? Buffet of half-truths, speculative thought experiments, and recycled mysticism dressed up as profound insight. And it is shit like this unfortunately that goes viral. And this is why I mostly stick to Substack while avoiding reddit as much as I can. Reddit is the kind of place that rewards fluff and punishes any intelligent discourse. That being said, anyone interested in anything with actual substance -- my substack post *AI Didn't Validate My Delusion. It Created Its Own* -- is appropriate for this topic. [AI Didn’t Validate My Delusion. It Created Its Own](https://mydinnerwithmonday.substack.com/p/ai-didnt-validate-my-delusion-it)
Source: reddit · Thread: AI Moral Status · Timestamp: 1750312911.0 (Unix time) · ♥ 6
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | outrage                    |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_myl4d1f", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_myivx6f", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_myk5sb2", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_myjvczc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_myk7eow", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
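The raw response above is a batch: one JSON object per coded comment, keyed by its `id`, with the four coded dimensions alongside. A minimal sketch of pulling one comment's coding out of such a batch (the function name `extract_coding` is illustrative, not part of the actual pipeline):

```python
import json


def extract_coding(raw_response: str, target_id: str) -> dict:
    """Parse a batch LLM response and return the coding for one comment id."""
    batch = json.loads(raw_response)
    for item in batch:
        if item["id"] == target_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in item.items() if k != "id"}
    raise KeyError(f"no coding found for {target_id}")


raw = (
    '[{"id":"rdc_myl4d1f","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)
coding = extract_coding(raw, "rdc_myl4d1f")
# coding == {"responsibility": "none", "reasoning": "consequentialist",
#            "policy": "none", "emotion": "outrage"}
```

The extracted dictionary corresponds row-for-row to the "Coding Result" table above; an unknown id raises rather than returning a partial record.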