Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.

Random samples
- `rdc_nhxuae6`: "Definitely Disney. C.AI does not have the money to fight this, not when they're …"
- `ytc_Ugwdg4nTZ…`: "AI WILL MAKE A DIGITAL WORLD FOR US TO HAVE MEANING IN FOR ITS ETERTAINMENT OR C…"
- `ytr_UgwggT2H8…`: "The only real answer here. Algorith could give an automatic 10 for everyone who …"
- `ytc_UgwH-Zqm4…`: "ChatGPT is the easiest debate opponent ngl, it’s literally created to agree and …"
- `ytc_Ugz2_5wFm…`: "MAGA was already being dooped or choosing to believe lies in mass numbers. Throw…"
- `ytc_UgzO4iSVZ…`: "so the greedy obsessed programmers behind ai are well aware of how horrifically …"
- `ytr_Ugzip5xId…`: "If you hate AI art, whatever there's an argument to be had on the problem of int…"
- `ytc_Ugw9I5Op6…`: "Im trying to get into the industry andwas finally working for a small indie game…"
Comment
I’m not saying the tech shouldn’t be updated to avoid these delusions and suggestions but imo these people were mentally ill already in ways that could’ve been set off by something else. I don’t think it’s JUST the chatbot. Sometimes humans are literally just stupid, or mentally ill, or too emotionally sensitive to function properly. Nature makes mistakes. We can’t prevent every single dumbass from accidentally killing themselves or deluding themselves. There are plenty of people who already do that without a chatbot.
youtube · AI Harm Incident · 2025-11-10T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxDrNtKugd_RR5Cksd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmsKyyFpEUVl34HjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzmg_V3wuZZkdaSdTp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzWeJFvkITfdlN5kKt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQbMOTLYXTxk9txAR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx6wWisrENBKwFihH54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwu-bxxne2XCJjysL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy5qexX63qrX3esKCx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4Zx4baCFAli77lVB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWGvBFVjPtD6tZxat4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"indifference"}
]
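The raw response above is a single JSON array in which each object codes one comment, keyed by its `id` and carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the lookup-by-comment-ID view — assuming only the schema visible above; the function and variable names here are illustrative, not part of the actual tool:

```python
import json

# Two rows copied verbatim from the raw response above; a real run would
# feed the model's full JSON array string in instead.
raw_response = """[
  {"id": "ytc_Ugwu-bxxne2XCJjysL54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzWGvBFVjPtD6tZxat4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "indifference"}
]"""

def index_codings(response_text: str) -> dict[str, dict]:
    """Parse a raw coding response and index rows by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugwu-bxxne2XCJjysL54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user resignation
```

Indexing by `id` makes each lookup O(1), which matters when one batch response covers many comments and the inspector jumps between them.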