Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Humans are just wet neural networks. The “Human Condition” is a construct people… (ytc_UgyYMFg-k…)
- I came here to chew bubblegum and kick ass, and I'm all out of bubblegum. Freak… (ytc_UgzimyFZ7…)
- The best position to be in right now is to be on the AI side. To be investing in… (ytc_UgxDSTWjL…)
- There is just to much highly educated people. Ask China. Basic skill is engineer… (ytc_Ugxuo4NKR…)
- @spaceflier_ Thats their error/choice. I don't want my life in the hands of an A… (ytr_UgyS21jo9…)
- I want anyone out there to know that art is not talent or smth you're born with,… (ytc_Ugxy4OGmw…)
- Having met a lucky share of my generation's super genius innovators and noting t… (ytc_Ugxjsx-Xy…)
- been using chatgpt for homework help, but i make sure to check with Winston AI s… (ytc_Ugzyhtu3E…)
Comment
This seems less a problem with the AI more a problem with the person. When they posted this story in the news, they left out the part where the chat bot though he was talking about cleaning products. Even about a year or so ago ChatGPT and Gemini warn you about taking medical advice from them. At the most it should be use get an idea of a situation, not as actual medical advice. But the problem was it confirmed his bias that he knew better. I use AI chat bots all the time for replacement items in recipes because I am looking to cut some things without losing a ton of flavor, but I am not dumb enough to use a chemical in my food. And when you make it clear (usually I will post the recipe or state I want to replace salt in X dish or in my diet) it does a pretty good job. Often the only real answer is use less salt. Its almost never required anyway (unless you're Gorden Ramsey).
youtube · AI Harm Incident · 2025-12-13T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
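Each coded comment is checked against a closed set of categories per dimension. As a minimal sketch, the validator below uses category sets inferred only from the values visible on this page; the real codebook may define more categories, and the `validate_coding` helper is hypothetical, not part of the actual pipeline.

```python
# Category sets per dimension, inferred from values visible on this page.
# ASSUMPTION: the full codebook may contain additional categories.
DIMENSIONS = {
    "responsibility": {"user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in DIMENSIONS.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

# The coding shown in the table above passes cleanly.
example = {
    "responsibility": "user",
    "reasoning": "deontological",
    "policy": "industry_self",
    "emotion": "resignation",
}
print(validate_coding(example))  # -> []
```

A check like this is useful as a guardrail between the raw LLM response and the database, since the model occasionally emits values outside the codebook.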
Raw LLM Response
```json
[
 {"id":"ytc_Ugy__iGMWRjbHIFutXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgylI3kVhW1wiyvEm_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxKimeAVZw78dJnpOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyUXUmDK0Yptmgzytd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzcplkTmE05kIO37ct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxZExYS4tn8peC4kh54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwvwFsw79ea6xVWVSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_UgwOxdqdQSE0zB_bpJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzNNUISbIe1Lqmfu0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwLizaqEXlm91vvK794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
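Because the model responds with a JSON array of per-comment codings, looking up the coding for a specific comment is a parse-and-index step. A minimal sketch, assuming the raw response is valid JSON (two rows from the response above are inlined for illustration):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, keyed by comment ID.
# Two rows copied from the response above; a real response holds the full batch.
raw = '''[
 {"id":"ytc_UgwvwFsw79ea6xVWVSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_UgwOxdqdQSE0zB_bpJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

# Index the array by comment ID so any coded comment can be inspected directly.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_UgwvwFsw79ea6xVWVSR4AaABAg"]
print(coding["emotion"])  # -> resignation
```

In practice the parse should be wrapped in error handling (`json.JSONDecodeError`), since models sometimes wrap the array in prose or code fences.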