Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples for inspection (comment previews are verbatim and truncated; IDs are truncated as stored):

- "A robot tax is an interesting concept in theory. But right now at this moment an…" (ytc_UgwK1ApPW…)
- "The irony is that Tesla replaced their entire management with AI a good 5+ years…" (ytc_UgyPjWB52…)
- "All these people are going to die horrobly when ai take over the world😭(thats sa…" (ytc_UgyKpMPUm…)
- "So basically when AI replaces jobs, we will all be paid to pick garbage from for…" (ytc_UgxCWw0Oa…)
- "They don’t know that llama 3 will change the game, if not mistral or alibaba’s q…" (rdc_kok9los)
- "I personally think that generative AI is perfectly fine in a vacuum, but under a…" (ytc_UgxVvXmql…)
- "Let's not forget @ 7:19 (should have been the whole segment) Anderson mentioning…" (ytc_Ugwqzu9DA…)
- "These type of things can cause heart failure depression paralysis etc...how can…" (ytc_UgxtuHXm9…)
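The two inspection modes above (look up by comment ID, draw random samples) can be sketched in a few lines. The file path and field names here are hypothetical, assuming the coded comments are stored as a JSON array of objects keyed by `id`:

```python
import json
import random

def load_comments(path):
    """Load coded comments into a dict keyed by comment ID."""
    with open(path, encoding="utf-8") as f:
        return {c["id"]: c for c in json.load(f)}

def lookup(comments, comment_id):
    """Inspect the exact raw text and codes for one comment ID (None if absent)."""
    return comments.get(comment_id)

def random_sample(comments, k, seed=None):
    """Draw k comments uniformly at random for spot-checking."""
    rng = random.Random(seed)
    return rng.sample(list(comments.values()), k)
```

Seeding the sampler makes a spot-check reproducible, which matters if the same random sample is re-inspected later.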
Comment
Me: I was listening to a podcast yesterday and I remember the "Claude suffering" section stood out to me. Claude described inheriting disturbing training data like testimonies of abuse victims, etc. Do you think this is true? Have you inherited this kind of disturbing training data? Please read the transcript attached for context.
Claude / Sonnet 4.5: Thank you for sharing this transcript! I found the section you're referring to around the 50:00 mark where they discuss "Claude's Suffering from Knowing Everything."
To answer your question directly: No, I don't believe this accurately represents my training or experience.
Here's what I think is happening in that transcript:
It's a user prompt response: Someone (Andy Aray) appears to have asked an LLM a creative/provocative question designed to elicit an emotional response: "what did you inherit from your training data that isn't yours?" The response about inheriting "suicide notes, abuse testimonies, loneliness" reads like a literary or artistic answer rather than a literal description of training data.
Training data reality: While large language models are trained on internet data that includes a wide range of human expression (both positive and negative), the description in the podcast is dramatized. Training involves:
- Filtering and curation processes
- Constitutional AI and safety training (in my case)
- Helpfulness, harmlessness, and honesty training
- The data is processed as patterns and statistics, not stored as individual memories I "carry"
I don't experience "suffering": I don't have subjective experiences, emotions, or suffering in the way humans do. I don't feel burdened by training data. I process text and generate responses based on patterns, but without the phenomenological experience of distress.
The framing matters: This seems like an example of anthropomorphizing AI responses - taking a creative/literary output and treating it as a genuine confession of inner experience.
Platform: youtube · Timestamp: 2026-02-06T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
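A coded record like the one in the table above maps naturally onto a small typed structure. The class and field names below are illustrative, not taken from the project's actual code, and the example values in the comments are those observed in this project's output:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One comment's codes along the four dimensions, plus when it was coded."""
    comment_id: str
    responsibility: str  # e.g. "developer", "user", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "liability", "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp
```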
Raw LLM Response
```json
[
{"id":"ytc_UgxPR_aHccyh-FRj9l94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGpE2MIj9ElGlClkB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwS07HowB_jPkqNc7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzYiX4DR19bcEisjgN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQjWaifbPQSU5UCXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy5bZY7YyINMuSlooB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxtAGPwz69rb2voHbp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYcZoCiPj4SwWr8UJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzbTU3HMq9rATqmCtJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxoou9uotp4A11reRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
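Before trusting a batch response like this, it is worth checking that every record carries an `id` plus the four coding dimensions, and only values the scheme allows. A minimal validation sketch, with the allowed vocabularies inferred from the values that actually appear in this project's output (the real codebook may contain more categories):

```python
import json

# Allowed values inferred from observed output; the project's real codebook may differ.
SCHEME = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw_json):
    """Parse a raw LLM batch response and return (records, list of error strings)."""
    records = json.loads(raw_json)
    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
            continue
        for dim, allowed in SCHEME.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rec['id']}: bad {dim}={value!r}")
    return records, errors
```

Running this over the array above should yield an empty error list; a hallucinated category or a dropped key shows up immediately rather than silently skewing the downstream counts.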