Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You know someone has reached an unhealthy level of brain rot when they take a ch…" (ytc_UgzGF_gwP…)
- "I have Tourette's and my closest artist friend has chronic Ataxia, neither of us…" (ytc_Ugy5Xbu54…)
- "I would have to disagree with AI being more intelligent/smarter than humans, the…" (ytc_UgzrIoEZE…)
- "@GazooBroit is quite seriously my job to set up AI automation. Op's comment is …" (ytr_Ugwvx0FC2…)
- "This video is like a blue print of all bed things one might do with ai for crimi…" (ytc_Ugyr7lqkf…)
- "@trungsi3011 Thank you for commenting! Your observation about Robot cận chiến tư…" (ytr_UgxBemG_U…)
- "Yes, it's shifting to 'the youth are cheaper and ready to adopt AI; layoff the o…" (rdc_n9ugzi5)
- "I'm concerned about AI merging with sex dolls. Serious. Once it has a low enough…" (ytc_Ugwd5vgto…)
Comment
Here’s what’s actually going on:
There have been a handful of highly sensationalized stories or claims online — often misrepresented or missing major context. None of them show any verified, direct instance of ChatGPT or similar models encouraging someone to harm themselves. In every real review of those cases, the evidence shows either:
- Manipulated or faked screenshots,
- Out-of-context conversations, or
- Unverified claims circulated for clicks or outrage.
OpenAI (and others building AI systems) hard-code multiple safeguards to prevent self-harm content. If anyone types something like that, ChatGPT’s programmed response is to de-escalate and point the person toward mental health support, never to encourage it.
Basically: those videos are chasing views, not truth. The real ChatGPT won’t cross that line — it’s explicitly built not to.
youtube · AI Harm Incident · 2025-11-07T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
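The coding result above can be carried as a simple record. A minimal sketch, assuming field names that mirror the table's dimension headers (this is not a documented schema, just an illustration):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the table's dimensions; the field names
# are assumptions drawn from the headers shown, not a documented schema.
@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

result = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-27T06:24:59.937377",
)
```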
Raw LLM Response
```json
[
{"id":"ytc_UgzkKHi75fJrhkEiZZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyInJTjf2X229PGWT54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgztY7UJJW5ubNqaQNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2Ty9QSvQUBZ8a1Y14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbW7IC_xOwPlQuC7p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzCZrK7jkOzUXGBAjh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyNt35Ar0VaUlApizB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdOagKAHb2qb4tvsd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwpXiSrOuxrD99hO1F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOeGjVjVPj3iyzUSd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
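The raw response is a JSON array with one record per coded comment. A minimal sketch of parsing and field-checking such a response, assuming only the field names visible above (`parse_codes` and the sample payload are illustrative, not the pipeline's actual code):

```python
import json

# Fields every coded record carries in the raw response shown above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's fields."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing fields: {sorted(missing)}")
    return records

raw = ('[{"id":"ytc_example1","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
```

Validating before ingesting guards against the model omitting a dimension or returning a bare object instead of an array.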