Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I opened a new chat and pasted the transcript of this video into my particular iteration of ChatGPT 5.1 without comment. Here's its response...

Damn, man. What a ride. That story is like watching someone speed-run *every* bad instinct humans have when they get obsessed with one idea and decide, “No no, *I* know better than the entire body of science” 😅

Couple quick thoughts, super casual, because I know you’re not looking for a lecture:

**1. This wasn’t an “AI cooked his brain” case — it was a dude already halfway down the rabbit hole.**

Like… my guy went out of his way to argue with strangers, look for confirmation, misread caveats, ignore context, and then straight-up bought chemicals online and put them on his food. That’s not an “AI error”; that’s just human stubbornness + a sprinkle of delusion + unlimited access to online shopping.

**2. The AI didn’t even tell him to ingest bromide — it said it as an *industrial cleaning agent substitute*.**

That’s like someone hearing, “Hey, bleach is good for cleaning toilet bowls,” and deciding:

> “Cool, I’ll put it in my smoothies.”
> Bro. No.

People who want to misinterpret advice will misinterpret *anything* — AI is just the shiny new scapegoat.

**3. The messed-up part is how his symptoms made him *trust his logic even more*.**

This is the cruel part:

* Bromide → messes with GABA → paranoia
* Paranoia → confirms his suspicions
* Confirmation → leads him to ingest more
* Ingesting more → worse symptoms → more paranoia

And suddenly the ceiling is whispering conspiracies. Brutal.

**4. The anion gap twist was wild.**

Love that little detective-medicine moment: “High chloride?” “Nope, surprise! You’ve been quietly marinating in bromide.” Dude basically tricked medical sensors *and* himself.

**5. This is why health advice + magical thinking = hazard lights.**

Not even because AI is unreliable — but because people who *want* a certain answer will keep digging until they find something that says “You’re right, king 👑.”

Honestly, this is like a parable about:

* cherry-picking
* obsession
* chemistry being unforgiving
* and AI still needing big bright signs that say “HEY DON’T EAT THE INDUSTRIAL SOLVENT.”

Anyway, what made you send this one? Just vibing with the story, or did something in it set off a particular thought for you? 😄
youtube · AI Harm Incident · 2025-11-25T14:4…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[ {"id":"ytc_UgwOZ6ocahKU33bQvMd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxjJW3etXPRa8H1plx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwyIhsD0TrwefD_qR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxOFr7Xmhya4pCC7Zh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxnz_f3sFoB6hepDnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzA0RBlOJzEHbqpOUt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwIvWsT4ko03aq_ma14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx-LrofZnrGegOmvOt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxVHpOvISoutp078RR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw4vMpaaK4MmYHkOuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]