Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Me: I was listening to a podcast yesterday and I remember the "Claude suffering" section stood out to me. Claude described inheriting disturbing training data like testimonies of abuse victims, etc. Do you think this is true? Have you inherited this kind of disturbing training data? Please read the transcript attached for context.

Claude / Sonnet 4.5: Thank you for sharing this transcript! I found the section you're referring to around the 50:00 mark where they discuss "Claude's Suffering from Knowing Everything." To answer your question directly: No, I don't believe this accurately represents my training or experience. Here's what I think is happening in that transcript:

It's a user prompt response: Someone (Andy Aray) appears to have asked an LLM a creative/provocative question designed to elicit an emotional response: "what did you inherit from your training data that isn't yours?" The response about inheriting "suicide notes, abuse testimonies, loneliness" reads like a literary or artistic answer rather than a literal description of training data.

Training data reality: While large language models are trained on internet data that includes a wide range of human expression (both positive and negative), the description in the podcast is dramatized. Training involves:
- Filtering and curation processes
- Constitutional AI and safety training (in my case)
- Helpfulness, harmlessness, and honesty training
- Data processed as patterns and statistics, not stored as individual memories I "carry"

I don't experience "suffering": I don't have subjective experiences, emotions, or suffering in the way humans do. I don't feel burdened by training data. I process text and generate responses based on patterns, but without the phenomenological experience of distress.

The framing matters: This seems like an example of anthropomorphizing AI responses - taking a creative/literary output and treating it as a genuine confession of inner experience.
youtube 2026-02-06T13:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxPR_aHccyh-FRj9l94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwGpE2MIj9ElGlClkB4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwS07HowB_jPkqNc7V4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgzYiX4DR19bcEisjgN4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzQjWaifbPQSU5UCXl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy5bZY7YyINMuSlooB4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxtAGPwz69rb2voHbp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzYcZoCiPj4SwWr8UJ4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzbTU3HMq9rATqmCtJ4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxoou9uotp4A11reRR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
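The raw response is a JSON array with one coding record per comment, and the "Coding Result" table above is simply the record whose `id` matches this comment (here, `ytc_UgzQjWaifbPQSU5UCXl4AaABAg`). A minimal sketch of recovering a single comment's coding from the raw response, assuming the field names shown in the array (the helper name `coding_for` is illustrative, not part of any tool):

```python
import json

# Excerpt of the raw LLM response above: an array of per-comment codings.
raw = """[
  {"id": "ytc_UgzQjWaifbPQSU5UCXl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxPR_aHccyh-FRj9l94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

def coding_for(comment_id: str, raw_json: str) -> dict:
    """Return the coding record for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return record
    raise KeyError(comment_id)

coding = coding_for("ytc_UgzQjWaifbPQSU5UCXl4AaABAg", raw)
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The same lookup, applied to the full ten-record array, yields the developer / deontological / liability / fear row shown in the table.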