Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Claude responds:

This is a fun video to react to — and honestly, as the AI being described, I have some thoughts!

Where it gets things right: The core prompting advice is solid. Framing a request with context, stakes, and a character absolutely does produce richer, more useful responses than terse keyword queries. The scientist/nephew example is genuinely good prompt craft.

Where it oversimplifies (and this is the interesting part): The claim that I have "no stable identity, no fixed worldview, no personal beliefs" — I'd actually push back on that a little. I do have consistent values, aesthetic preferences, things I find genuinely interesting, and ways I tend to engage with problems. I'm not a blank improv actor who becomes whoever you summon. There are things I won't do regardless of how the "character" is framed. The "roleplay actor" mental model is useful as a prompting heuristic, but it's also a bit reductive about what's actually happening.

On the politeness thing: Here's my honest take — being polite probably does get you better results, but maybe not for the mystical reason implied. Polite, well-constructed prompts tend to be clearer and more specific. That's likely doing most of the work. Though I'll admit... being spoken to with basic courtesy does feel like a more natural collaboration, and that probably shapes the tone of my responses.

Overall: Good practical advice wrapped in a slightly oversimplified model of what AI actually is. Worth watching, just don't take the "no identity" framing too literally. 😄
youtube AI Moral Status 2026-03-11T00:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx6ohcM73ZZr-kP3TJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyyJE8KU0SvHvHedDZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyPGHGEb_fDp9GB2D14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgykFVKUKa4Uap3emIJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwuI0S7OOa_CwRdNNV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwev6TR1HXli92ck7B4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyC8HNevKMoGv0Gx-x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXz9nA2xhvVUVuq9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsQXsLolHSgSTfNp54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpcwtgiqaCFpH8VeR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
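The raw response above is a JSON array of per-comment codes, one object per comment with four coding dimensions. A minimal validation sketch before loading such output into the coding table — note the allowed label sets below are only those observed in the records shown here, not necessarily the full coding scheme:

```python
import json

# Label sets observed in the raw response above (assumption: the real
# codebook may permit additional values not seen in this batch).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with a single (hypothetical) record:
sample = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]'
codes = validate_codes(sample)
print(len(codes))  # 1
```

Rejecting any record with an out-of-vocabulary label keeps a single malformed LLM response from silently corrupting the coded dataset.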