Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
So basically, you could replace the word "AI" with "Internet" and have the exact same case with the exact same substance, because this guy could have used basically anything to reinforce his views.
And people do that all the time; it's called confirmation bias. Want to believe that the Earth is flat? You go online and find websites that reinforce your views.
Blaming AI for this, specifically ChatGPT, is quite disingenuous, and all those journalists and sites that cite "ChatGPT" and/or "AI" as a cause of this incident are simply dredging up fear over something that is not only innocuous but actually BETTER than consulting humans about this, because the humans are all over the place and just believe whatever they want to believe.
Also, it doesn't surprise me that the version of ChatGPT was 3.5, a now very old and little-used version which is quite inferior to both 4o and 5.
But the worst part of this is that nobody showed us this chat log.
What response did the AI produce, and what was the prompt for that response?
Where is the evidence?
Nowhere.
Either it was deleted, which is suspicious, or it never existed, because I seriously doubt that even ChatGPT 3.5 would actually advise anyone to replace chloride with bromide in their diet.
So what I gather from this story is that the man already had false preconceived notions about these elements, strong biases, and an "I know better than everyone else" mentality; he went to ChatGPT, and as soon as it gave him the answer he wanted, he went "Ah-ha! I was right all along! Now let's do this!", while ChatGPT had merely given him a general answer that both of these elements can be used in cleaning products to do the same job, so they are in effect interchangeable.
Or perhaps told him that there is no significant difference between them in certain specialized use cases, such as lab demonstrations or experiments, etc.
Who knows?
All I know is that there are people who can't wait to blame AI for everything, when it's actually much, much safer than talking to a human, because humans will tell you the most bizarre, irrational, false, outlandish things. Even licensed and certified working clinicians make big blunders, yet we still trust them and go to them for medical advice, thinking that they must know what they're doing; after all, look: a degree, a license, a lab coat and everything.
AI corrects people constantly when they are wrong. I've seen it in ChatGPT, Claude, Gemini, and others. It's just that it tends to do it in a soft, gentle way rather than an in-your-face way, to avoid offending anyone.
It plays the role of a helpful assistant that draws from already existing human knowledge, but we still manage to find excuses and ways to blame it for our own incompetence every chance we get.
And I know for sure that if you want accurate, reliable information, you go to the guy with the metal head.
Because even licensed clinicians disagree with each other all the time and miss obvious things, even on the same test, same lab result, same scan, same symptoms, same patient, same everything. Their incompetence and failures are far more frequent than should be acceptable, yet each of them is a super-expert genius who cannot be wrong, even when they obviously are, all while their opinions conflict with one another.
Source: youtube · AI Harm Incident · 2025-11-30T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwQ8QsoT8m6vt5bRad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3dDwNATyn56jIw7d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVZ3DhbAoiSUteXZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxp5FtsbZ-AVkJUSsF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgziYImOcnpgKplytEZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSRj_CUpIpSl7-G_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzjl00Li-2R0e3CZrx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTO459PZG0-gGPBE14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_IDc6JPmE9sJsnPJ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvTna4P_j-zQezzQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
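The raw response is a JSON array with one object per comment ID, carrying the four coding dimensions as string fields. Looking up a coding by comment ID then reduces to parsing the array and indexing it. A minimal sketch, assuming only the field names visible in the JSON above (the two sample entries are copied from it; the `codings` dict is an illustrative helper, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. Field names
# (id, responsibility, reasoning, policy, emotion) match the output above.
raw_response = """[
  {"id": "ytc_Ugw_IDc6JPmE9sJsnPJ4AaABAg", "responsibility": "user",
   "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxp5FtsbZ-AVkJUSsF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_Ugw_IDc6JPmE9sJsnPJ4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # → user contractualist
```

Note that the first entry matches the Coding Result table for the quoted comment (responsibility: user, reasoning: contractualist, policy: none, emotion: indifference).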