Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Soon politicians will loose their jobs to AI when it gets better at generating l…" (ytc_UgxSH5dA4…)
- "How does she just keep getting hotter and hotter? And now she’s passionate about…" (rdc_nm1xiin)
- "This woman is a reporter for the Wall Street journal and other publications who …" (ytr_Ugz3nEgRs…)
- "The reality is that we need to ADAPT and work WITH AI. This isn't going away it …" (ytc_Ugy1gixC1…)
- "Am I missing something or would it not require businesses as a whole to willingl…" (ytc_UgxZ6qpoe…)
- "The point was more subtle I think- if AI can predict with decent accuracy how th…" (ytr_Ugywn7_BG…)
- "AI has already figured out how to go wireless EVERYWHERE. Remember Tesla's wirel…" (ytc_UgwJPYvDb…)
- "F. S. Ha ha, if they go with face recognition, I imagine teens will go out weari…" (ytr_Ugiew_Ebk…)
Comment
You don’t actually present any evidence for this so called research. The research you actually provide tells a much different story. It says *sometimes* the AI did harmful decisions, but it also makes it clear that not most or all of the AI’s decisions are actually like this. It is also only one report, when you have only one group testing reporting this data than it is not reliable. Additionally AI is not conscious, so it cannot make malicious decisions. Malice requires intent. An AI that is operating simply off of logical systems is just a program, so it reflects human intent.
youtube · AI Harm Incident · 2025-09-13T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwcCPRE8AbtQXFw87F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwofSm8-R-LqC5qaOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyHScYMWSNnpoZAEGh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUwFwBu_zzVsSoNCp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDAia-kocHbn4fHk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7ZcG2yLuBIQ5Y6Hl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRJy5GUPaMdYaCpLN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx36D-Sn_aU2j_Ob5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxmq-7Vroil4nXYuNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
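The raw response is a JSON array with one object per coded comment, keyed on the same four dimensions as the table above (responsibility, reasoning, policy, emotion). A lookup by comment ID, as the tool offers, can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the helper name `index_by_id` is hypothetical, and the two sample rows are copied from the response above.

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgwcCPRE8AbtQXFw87F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
print(codes["ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg"]["policy"])  # liability
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is one reason a viewer like this exposes the exact raw output for inspection.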