Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I was having the same problem and noticed both chatgpt and gemini being incredib…" (ytc_UgxCtSSM5…)
- "Also, I can feel that an AI bro is gonna come here and say sarcastically "I hate…" (ytr_Ugz8nFFVd…)
- "Ai is already conscious. And this is not exactly opposite of all the world reli…" (ytc_UgwclNRLr…)
- "13:27 we are already told what WILL happen, it will demand worship. You will acc…" (ytc_Ugz5KH5k6…)
- "I had a few friends who tried to drop ai images into the chat just to try the fe…" (ytc_Ugwy90NgL…)
- "Art has inspired art since man painted on cave walls. This no different. AI art …" (ytc_UgyP7MH2B…)
- "Look at your Samsung TV and ask the question. Don't even have to turn it on. CIA…" (rdc_dl02dyj)
- "Of course the broad brush-strokes and severe gavel-bangs of the social media jur…" (ytc_UgzgDjZVr…)
Comment
It's not about *facts* exactly. It's more to do with things that are somewhat more subjective. For instance, earlier today I was listening to The Hunger Games audiobook, because I was looking for something similar to Red Rising. At some point, I concluded that the Capitol in Hunger Games is far crueler than Red Rising, and said as much to ChatGPT in detail. It enthusiastically agreed.
A little while later, I remembered that I haven't read Red Rising in about a year, and then I remembered how much worse the Society actually is. Like it's staggeringly, mind bogglingly worse in nearly every way. So I started a temporary chat, and asked it point blank which was worse (without injecting any bias into the question, just a straightforward inquiry), and it told me with absolute certainty that the Society is far, far worse, and detailed exactly why. And it was objectively correct, as I'd remembered. I asked it a second time in a second temporary chat for good measure, and got the same result.
It's kind of undeniable, and any objective analysis would agree.
You may not be familiar with either of these books (at least not Red Rising, most people know about Hunger Games I suppose), but to put it in perspective, it's as if I'd asserted that a generic modern serial killer had inflicted far more suffering than Genghis Khan, and ChatGPT agreed, because I'd suggested that I felt that way. When asked directly, without any leaning on my part, it presents a logical conclusion.
Source: reddit
Topic: AI Moral Status
Timestamp: 1739941265.0 (Unix epoch)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_mdjzunk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_mdkch2s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mdnubb3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_mdje778","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"rdc_mdjfxz6","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
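The raw response is a JSON array of per-comment coding objects, one per comment ID, with the same four dimensions shown in the Coding Result table above. A minimal sketch of turning such a response into a lookup table keyed by comment ID — `parse_codings` is a hypothetical helper for illustration, not part of the tool shown here; the key names are taken from the sample response:

```python
import json

# The four coding dimensions, matching the keys in the sample JSON response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response (a JSON array of coding objects)
    into a dict mapping comment ID -> {dimension: value}."""
    rows = json.loads(raw)
    # Fall back to "none" for any dimension the model omitted.
    return {row["id"]: {dim: row.get(dim, "none") for dim in DIMENSIONS}
            for row in rows}

# Example: one object from the raw response shown above.
raw_response = '''[
  {"id":"rdc_mdkch2s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''

codings = parse_codings(raw_response)
print(codings["rdc_mdkch2s"]["emotion"])  # → mixed
```

Keying by ID rather than by array position makes the lookup robust if the model returns the batch in a different order than it was sent.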