Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI says it has no prior conversation with this guy because it literally didn't. I think you're fundamentally misunderstanding how memory works for AI chatbots: they don't have memory like you and I do; they only remember details within their context window. In your case, its context window is only the conversation that YOU are having with it. It won't recall details from other conversations with other users, because the version you're talking to literally doesn't have access to those conversations; there would be major privacy concerns and performance issues otherwise. Therefore, the version you're talking to will logically deny that it ever suggested anything to another user, because as far as it remembers, the only person it has ever talked to is you. 'I' and 'ChatGPT' are in fact two different entities. Think of it like clones of an original: a clone of the original ChatGPT told the guy about sodium bromide. The current clone you're talking to is not the same one. To the best of its ability, even when it knows it is called ChatGPT, it cannot recall ever having that conversation, so the ChatGPT the news source is referring to must be different somehow. It's frankly quite logical and reasonable for it to say these things.
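The commenter's point about per-conversation context can be sketched in code. This is a minimal illustration, not a real provider API: it assumes a hypothetical `build_request` helper and shows that each stateless request carries only that user's own messages, so nothing from any other user's session is visible to the model.

```python
# Illustrative sketch: chat requests are stateless, so the model only
# "remembers" what is explicitly included in this request's messages.
# build_request is a hypothetical helper, not a real SDK function.

def build_request(user_messages):
    """Assemble the context window for ONE user's conversation."""
    # Nothing from other users' sessions is ever added here.
    return {"messages": list(user_messages)}

alice = build_request([{"role": "user", "content": "Hi"}])
bob = build_request([{"role": "user", "content": "Hello"}])

# Alice's request contains no trace of Bob's conversation, and vice versa.
assert all(m not in alice["messages"] for m in bob["messages"])
```

Under this model, "denying a conversation with another user" is simply the absence of that conversation from the request payload.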
YouTube · AI Harm Incident · 2025-12-16T02:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzW0K1DUDQyaifgR854AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6C7jSlJ2hWrlgc7d4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwL39VOUL7pcwIv3aR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyqs7hI1yQPVzE3b_B4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxWzuJwxwrracLGkQF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMlQPON4Gh8ZSqTgt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwOnF9I85WcIIH754N4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz28CvWiJdZxtb4V4J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyKkg_AZj14AJ3YYFh4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwwtdvQyEl5cCWo4Lt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
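The coding result shown above corresponds to one record in the raw response. A minimal sketch of how such a response can be parsed into per-comment rows, assuming the raw output is valid JSON (the snippet below reuses only the first record from the response above):

```python
import json

# First record from the raw LLM response, copied verbatim.
raw = '''[
  {"id": "ytc_UgzW0K1DUDQyaifgR854AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

# Index coded rows by comment id so each comment's dimensions
# can be looked up when rendering its "Coding Result" block.
codes = {row["id"]: row for row in json.loads(raw)}
record = codes["ytc_UgzW0K1DUDQyaifgR854AaABAg"]
print(record["reasoning"])  # consequentialist
```

In practice the full ten-record array would be parsed the same way; the lookup for this comment's id yields exactly the Dimension/Value pairs tabulated above.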