Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
13:00 I really can't stress enough how much I appreciate you emphasizing that this kind of thing is a "people problem", not an "AI problem" — although the title could have been more nuanced in that respect for those who don't watch the full video. Perhaps in this case it's even an education problem, because it's worth asking how someone who studied nutrition in college could ever reach these conclusions.
As for ChatGPT being in denial, there is an explanation for this. First, technically speaking, since there are different models, when you ask ChatGPT 5 about something that ChatGPT 3 did, it's not the same model, so 5 is correct in claiming it did no such thing: it was 3 that did it. But this is just a technicality that ChatGPT probably doesn't register. It's also worth mentioning that more recent versions sometimes pull information from the internet in real time, while at other times they rely only on the model itself. If the model doesn't yet have the latest information, it can happen that, when questioned about something in the model, it will look online for more recent information and correct itself. It's also very possible that the new safeguards are causing some of the contradictions: if it were to "admit" that it once advised something, someone might interpret that as the advice still being OK. A potential result of the over-correction.
youtube
AI Harm Incident
2025-11-25T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxGbGEisLx33BHB1YN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgymfACFHbkTnF_5ZYR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwp8aLCxc9kCVMPwcJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwGZLp1tLHgPYiA25Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw5nvfHwc6url37k294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzNMZmCDZtZGA8QJXR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyVLqGkVfkMLiY7vf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw0aivAl35h5IPHK5B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwbjei1Hehs49yUYw14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxeZLB9qjyPq5qgCoZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
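One way to consume a raw response like the one above is to parse and validate it before it feeds the dimension table. Below is a minimal Python sketch; the dimension names and the allowed-value sets are inferred from the values visible in this response, not taken from the tool's actual codebook, so treat them as illustrative assumptions:

```python
import json

# Assumed codebook: dimension names and value sets inferred from the
# raw response above (illustrative only, not the tool's real schema).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "fear",
                "indifference", "resignation", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the raw LLM response and check each row against ALLOWED."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response in the same shape as the dump above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(parse_codings(raw)[0]["emotion"])  # mixed
```

Validating up front like this also catches the kind of malformed JSON an LLM occasionally emits (e.g. a misplaced closing bracket), since `json.loads` raises immediately rather than silently dropping the last row.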