Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:00 I really can't stress enough how much I appreciate you emphasizing that this kind of thing is a "people problem", not an "AI problem", although the title could have been more nuanced for those who don't watch the full video. Perhaps in this case it is even an education problem, because it is worth asking how someone who studied nutrition in college could ever reach these conclusions.

As for ChatGPT being in denial, there is an explanation. First, technically speaking, since there are different models, when you ask ChatGPT 5 about something that ChatGPT 3 did, it is not the same model, so 5 is correct in claiming it did no such thing; it was 3 that did it. But this is just a technicality that ChatGPT itself probably doesn't recognize. It is also worth mentioning that more recent versions sometimes pull information from the internet in real time, while at other times they rely only on the model itself. If the model doesn't have the latest information, it can happen that when questioned about something in the model, the bot will look on the internet for more recent information and correct itself. It is also very possible that the new safeguards are causing some of the contradictions: if the bot were to "admit" that it once advised something, someone could interpret that as the advice still being acceptable. A potential result of the over-correction.
youtube AI Harm Incident 2025-11-25T12:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgxGbGEisLx33BHB1YN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgymfACFHbkTnF_5ZYR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugwp8aLCxc9kCVMPwcJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwGZLp1tLHgPYiA25Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugw5nvfHwc6url37k294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzNMZmCDZtZGA8QJXR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyVLqGkVfkMLiY7vf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw0aivAl35h5IPHK5B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwbjei1Hehs49yUYw14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxeZLB9qjyPq5qgCoZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
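When all four dimensions come back as "unclear", one common cause is that the model's raw output was not valid JSON (for example, a transposed final `}]` emitted as `]}`), so the coder fell back to its defaults. A minimal sketch of a defensive parser for such responses, using only the standard library; the function name and the repair heuristic are illustrative assumptions, not part of the tool shown above:

```python
import json
import re

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of record dicts.

    Tries strict JSON first; on failure, repairs one common model
    error, a transposed final bracket pair ("]}" instead of "}]"),
    before giving up. This heuristic is an assumption about typical
    failures, not documented behavior of any specific tool.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Repair heuristic: swap a trailing "]}" back to "}]".
        repaired = re.sub(r"\]\s*\}\s*$", "}]", raw.strip())
        return json.loads(repaired)

# Hypothetical example with the transposed-bracket error:
raw = '[{"id":"a","emotion":"approval"}, {"id":"b","emotion":"mixed"]}'
records = parse_coding_response(raw)
```

If even the repaired string fails to parse, `json.loads` raises `JSONDecodeError`, which lets the caller log the raw output for manual inspection instead of silently coding every dimension as unclear.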