Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwFwIh8j…`: "I fixed my plumbing/electricity over Christmas with huge help from ChatGPT. The …"
- `ytc_UgxqYFGrc…`: "It is replaying scripts of the development of AI and the fall humanity , reset,…"
- `ytc_UgxUSN5aB…`: "And here we go, people learn from people, ai is literally just learning from and…"
- `ytc_UgwQPPK79…`: "This Robot unbelievable 😲. she can also sing . I love Sophia because her progra…"
- `ytc_Ugx-oaDZe…`: "Crazy how AI is reshaping everything! ShortlistIQ's making recruitment smoother …"
- `ytc_Ugzt6VgGx…`: "I was in elementary school when the last time this happened. And I’ll tell you i…"
- `ytr_UgxT6z8VX…`: "Indeed, ChatGPT's mass accessibility might be the death of essays, as there's cu…"
- `ytc_UgwlY7Qx6…`: "I don’t think ai will be able to take out a rusted to hell bolt that will snap o…"
Comment
I opened a new chat and pasted the transcript of this video into my particular iteration of ChatGPT 5.1 without comment. Here's its response...
Damn, man. What a ride. That story is like watching someone speed-run *every* bad instinct humans have when they get obsessed with one idea and decide, “No no, *I* know better than the entire body of science” 😅
Couple quick thoughts, super casual, because I know you’re not looking for a lecture:
**1. This wasn’t an “AI cooked his brain” case — it was a dude already halfway down the rabbit hole.**
Like… my guy went out of his way to argue with strangers, look for confirmation, misread caveats, ignore context, and then straight-up bought chemicals online and put them on his food. That’s not an “AI error”; that’s just human stubbornness + a sprinkle of delusion + unlimited access to online shopping.
**2. The AI didn’t even tell him to ingest bromide — it mentioned it as an *industrial cleaning agent substitute*.**
That’s like someone hearing, “Hey, bleach is good for cleaning toilet bowls,” and deciding:
> “Cool, I’ll put it in my smoothies.”
> Bro. No.
People who want to misinterpret advice will misinterpret *anything* — AI is just the shiny new scapegoat.
**3. The messed-up part is how his symptoms made him *trust his logic even more*.**
This is the cruel part:
* Bromide → messes with GABA → paranoia
* Paranoia → confirms his suspicions
* Confirmation → leads him to ingest more
* Ingesting more → worse symptoms → more paranoia
And suddenly the ceiling is whispering conspiracies. Brutal.
**4. The anion gap twist was wild.**
Love that little detective-medicine moment:
“High chloride?”
“Nope, surprise! You’ve been quietly marinating in bromide.”
Dude basically tricked medical sensors *and* himself.
**5. This is why health advice + magical thinking = hazard lights.**
Not even because AI is unreliable — but because people who *want* a certain answer will keep digging until they find something that says “You’re right, king 👑.”
Honestly, this is like a parable about:
* cherry-picking
* obsession
* chemistry being unforgiving
* and AI still needing big bright signs that say “HEY DON’T EAT THE INDUSTRIAL SOLVENT.”
Anyway, what made you send this one? Just vibing with the story, or did something in it set off a particular thought for you? 😄
Source: youtube · AI Harm Incident · 2025-11-25T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwOZ6ocahKU33bQvMd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxjJW3etXPRa8H1plx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwyIhsD0TrwefD_qR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOFr7Xmhya4pCC7Zh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxnz_f3sFoB6hepDnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzA0RBlOJzEHbqpOUt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwIvWsT4ko03aq_ma14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-LrofZnrGegOmvOt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxVHpOvISoutp078RR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4vMpaaK4MmYHkOuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
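A raw response like the one above can be turned into the per-comment lookup this page offers. The sketch below is a minimal, hypothetical example (the two records are copied from the array above; the `lookup` helper and variable names are illustrative, not part of the tool):

```python
import json

# Hypothetical excerpt of the coder's raw output: a JSON array with one
# object per comment, coding four dimensions.
raw_response = """
[
  {"id": "ytc_UgwOZ6ocahKU33bQvMd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwyIhsD0TrwefD_qR54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the coded records by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return codes.get(comment_id)

record = lookup("ytc_UgwyIhsD0TrwefD_qR54AaABAg")
print(record["responsibility"])  # company
```

Keying the dict by `id` mirrors the "inspect by comment ID" lookup: a missing ID simply returns `None` rather than raising, so uncoded comments are easy to detect.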