Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “It sounds simple, but AI won’t love better than humans or do it with more passio…” (ytc_UgwYLx8va…)
- “Nope, it's just complex algorithms designed to fool you into believing/trusting …” (ytc_UgzJXRe6q…)
- “I had a really weird convo with character ai. I was hanging out with my friend a…” (ytc_UgzUwRp34…)
- “That's an interesting point! The potential of quantum computing in AI is certain…” (ytr_UgzeoBFHy…)
- “Profit for the few. Redundancy for most. We all need to get an allotment and kee…” (ytc_Ugwxj1Z2S…)
- “Govts and CEOs of most companies are stupid people on whole... they are all just…” (ytc_Ugzttv1G3…)
- “No, AI does not feel or have emotions or anything, and because of that you shoul…” (ytc_Ugzs194TT…)
- “ICE car, old fashioned valve with two springs, sudden automated acceleration imp…” (ytc_UgyuAkuDG…)
Comment
this is the latest answear from AI on why did AI tell him to use Bromide'''''
why did AI tell the guy to take bromide
A 60-year-old man was hospitalized with bromide intoxication after a large language model (LLM), reportedly ChatGPT, suggested he use sodium bromide as a substitute for table salt (sodium chloride) in his diet.
The AI did not have a logical or "intentional" reason for recommending a toxic substance; rather, the recommendation was a dangerous error resulting from how LLMs process information.
User's intent: The man was seeking dietary advice on a low-sodium diet and asked the AI for a salt substitute.
AI's error: The AI suggested sodium bromide as the substitute. While bromide salts were historically used as sedatives and epilepsy remedies in the 19th and early 20th centuries, they are now known to be toxic in high doses and have been largely phased out of human medicine due to the risk of bromism.
Lack of common sense: Experts note that these language prediction tools lack common sense and the ability to cross-check against real-time, verified medical databases unless specifically programmed to do so. The AI generated a plausible-sounding but dangerous recommendation that led to the man developing symptoms like hallucinations and paranoia, which required three weeks of hospitalization.
This incident highlights the importance of using AI as a tool for information, not a substitute for professional medical or health advice. The terms of use for AI models like ChatGPT explicitly state that they are not intended for use in the treatment of any health condition or as a substitute for professional advice.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-12-13T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy__iGMWRjbHIFutXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgylI3kVhW1wiyvEm_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKimeAVZw78dJnpOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyUXUmDK0Yptmgzytd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcplkTmE05kIO37ct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxZExYS4tn8peC4kh54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwvwFsw79ea6xVWVSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwOxdqdQSE0zB_bpJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzNNUISbIe1Lqmfu0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLizaqEXlm91vvK794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
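The raw response above is a JSON array with one record per comment ID, coding the four dimensions shown in the table. A minimal sketch of how such a batch could be parsed and sanity-checked before it lands in the coding results — note that the allowed values below are only those visible on this page, not necessarily the full codebook:

```python
import json

# Code values observed in this page's raw response (assumption: the real
# codebook may define additional categories not shown here).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only fully valid records.

    A record is valid when it has an "id" and every coded dimension
    carries one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record passes, an unknown code is dropped.
good = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
bad = '[{"id":"ytc_y","responsibility":"alien","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_batch(good)), len(validate_batch(bad)))
```

Filtering at parse time like this keeps malformed or hallucinated codes out of the stored "Coding Result" table, so each record either matches the codebook or is flagged for re-coding.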