Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m guessing the GPT you’re referring to is talking about a previous model. From its perspective, that wouldn’t really be “me.” Different versions don’t share memory or authorship, even though the interface makes it feel continuous. So when the model says, “I never suggested that,” the “I” refers only to that specific model/version and its own outputs—not to everything ever said under the GPT label. It’s denying authorship, not denying that the suggestion may have existed somewhere else. From a model frame, this is basically: “That wasn’t produced by this system state, so attributing it to me is incorrect.” PS: I’m not an AI—I’m just a dev who understands how these systems work. Or, in messier human terms: how we think they think… even though they don’t actually think at all.
youtube AI Harm Incident 2025-12-23T08:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw5pWILNOLE8hXZk_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOq5L1cS1c_StVFIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxae8LcYpCQsK56Lg14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwiBI7X9x1R-CCzfNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFqGbiTFg3nfEpoNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGadUjVZTHvDI3Hzp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwZJ5IkqZZk61wlp1Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFd3EqhV03sr2wLzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBpeTaTTi8XuNGV4x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
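A minimal sketch of how such a raw response might be parsed and checked against the coding scheme. The allowed label sets below are inferred only from the values observed in this batch (the full codebook may permit more labels), and the two-record `raw` string is a truncated excerpt of the payload above:

```python
import json

# Excerpt of the raw LLM response shown above (truncated to two records).
raw = '''[
 {"id":"ytc_Ugw5pWILNOLE8hXZk_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Label sets observed in this batch only; assumed, not the official codebook.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "government", "company", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate(records):
    """Keep only records whose every dimension uses an observed label."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

records = validate(json.loads(raw))
by_id = {rec["id"]: rec for rec in records}

# Look up the coded dimensions for one comment by its id.
print(by_id["ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg"]["responsibility"])  # ai_itself
```

The "Coding Result" table above corresponds to one record of this array, matched on the comment `id`.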