Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
No AI definitely didn't tell that NaBr is a viable alternative for NaCl, (key word incoming!!!) >>>MOST<<< AIs aren't only fed those brainslop stuff that is the main building block of those reddit conversations but also valid checked information, and in that case it would've told him to NEVER under ANY circumstances switch NaCl to NaBr, that is why most of the AI chats in the video are cut at the part where AI tells: "Excellent question!", and the parts where it tells him: "Not to try that." are left out.
| Field | Value |
|---|---|
| Source | youtube |
| Incident | AI Harm Incident |
| Timestamp | 2026-01-07T20:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgznktxIXEyR9slrL_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0CErlTu0HFECd5sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweDu4YEPEdUAiAWAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZ_5bu_BUowRAo3DB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz0pSaDxbyHsa8wcuR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydQgw55Ctr--T_3bZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgYgsxC9MufnkMPGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwZVdx8uXbM4dGWAWt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzK1gxtCBZeRUl0nWl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyJV9BInj7UI_MHIyZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```