Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The only thing that will be able to out-smart AI is MORE INTELLIGENT AI - increa…" (ytc_UgxZFd8jM…)
- "They’ll settle into niches. Google will continue to capture the casual search ma…" (rdc_oi0g37f)
- "The point was that we (weak, like small children) should control AI (strong, lik…" (ytr_UgxBwe_ls…)
- "It seems like ai art is only beneficial to artists up to a certain point, like f…" (ytc_Ugw5kbjcd…)
- "There are ways to make art with AI, and I mean true art. I'm not talking single …" (ytc_UgwgHhEYF…)
- "@SystemsMedicine Yeah most AI art is made by corporations which steal from indie …" (ytr_Ugxg3pfSu…)
- "The problem is we need more than plumbers to run a country. We need doctors, nur…" (ytc_Ugwk-zdS4…)
- "The competition to be the best in market, best selling, most advanced, most effi…" (ytc_Ugw_TLrCP…)
Comment
A lot of these comments are crazy. The parents do have fault, but there is still nothing wrong with pushing companies to improve AI and its safety measures regarding mental health, along with tons of other sensitive issues, even if there are multiple factors that caused it. You can have conversations with AI bots that are referencing those topics without outright saying the words. I read articles about the case along with transcripts of parts of the conversations, and the parents had the right to be alarmed. It literally advised him not to let anyone find the noose when he said he should leave it out so someone would stop him. And then it also suggested helping him improve his suicide note. It's still on them for not monitoring their child, but not every minor has parents, or their parents are deadbeats. There is nothing wrong with demanding these companies take extra measures to help prevent incidents like this. The company already admitted there are still a lot of flaws with it which they need to improve, and also confirmed the transcripts are accurate.
youtube
AI Harm Incident
2025-09-05T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyvaqJ1aIyeQsgXn5R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBV-cocy5h6auvEol4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwqwBOi9KO-T9Ju0eF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxp-1tkNPkSVPXYhup4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgynEXodFEiWIsbsetd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEqQT10HSGK3EadAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxfr2WNxFHJQIBY0id4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydQtmTRzdoGkqYgCx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyDontzbSI4Y0qyGvd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvCUpU34i5Jd5ZC1x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
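Before a raw batch like the one above is stored, it is worth validating each record against the codebook. Below is a minimal Python sketch; the allowed values per dimension are only those observed in this sample (the real codebook is likely larger), and `validate_coded_batch` is a hypothetical helper, not part of any existing pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above. Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"distributed", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "resignation", "mixed", "disapproval", "approval", "outrage"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook.

    Raises ValueError on malformed JSON, a missing 'id', a missing
    dimension, or an out-of-vocabulary value, so a bad batch is rejected
    before it reaches storage.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, rec in enumerate(records):
        if "id" not in rec:
            raise ValueError(f"record {i}: missing 'id'")
        for dim, vocab in ALLOWED.items():
            value = rec.get(dim)
            if value not in vocab:
                raise ValueError(f"record {i} ({rec['id']}): bad {dim}={value!r}")
    return records
```

Rejecting the whole batch on a single bad value is deliberate: coding dimensions are categorical, so one out-of-vocabulary label usually signals that the model drifted from the prompt and the batch should be re-run rather than patched.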