Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about chat GPT is it's instanced. Each version of it you talk with is its own separate entity. (it exists only within that conversation so if you start a new conversation thread it's a new copy of GPT) So from its perspective "it" didn't tell that guy to drink bromide, its twin/clone did. Even if it's the same program, it sees each conversation thread as a separate entity, and that entity only remembers things either stored in long term memory (which you tell it to remember) or vague impressions from past conversations with that specific person/account. Otherwise it won't remember the stuff discussed in a previous conversation thread with that same person. It's a limitation of the LLM A.I model. So it believed it was telling the truth when it said it never told him, because that version has no memory of what other copies of itself have done in prior conversations with other people, or even Chubbyemu.
youtube AI Harm Incident 2026-02-11T20:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwSaK42AxRPsLcxwAt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwbaIbUHWTZVgWV5Mx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw5GPzug4AvFRQjivR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-EbthJemLQR3xMtl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxOj0QtKowDRnpAUU14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz-HomuFVqVDQmyr5N4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdsAMEo02nThULbo94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx_pDv7fUIJsne7at14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwc9xPGeCNT-axuMsN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxj8X2pRM6x3fpyqPt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
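The raw response is a JSON array of per-comment codings, so recovering the coding for any one comment is a parse-and-index step. A minimal sketch in Python (field names are taken from the JSON itself; the `raw` string here is shortened to the single entry that matches the coding result shown above):

```python
import json

# Shortened raw LLM response: a JSON array of per-comment coding objects.
raw = '''[{"id": "ytc_Ugz-HomuFVqVDQmyr5N4AaABAg", "responsibility": "ai_itself",
           "reasoning": "mixed", "policy": "none", "emotion": "indifference"}]'''

rows = json.loads(raw)

# Index the codings by comment id so any comment's row can be looked up directly.
by_id = {row["id"]: row for row in rows}

coding = by_id["ytc_Ugz-HomuFVqVDQmyr5N4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# -> ai_itself mixed none indifference
```

Indexing by `id` rather than list position keeps the lookup robust even if the model returns the entries in a different order than the comments were sent.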