Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxFtxadf…: "Or you could be born with dysgraphia, which makes translating a three dimensiona…"
- ytc_UgwAZvGGW…: "Have you artists not looked at and learned from other artists' work? Isn't that …"
- ytc_UgwRT-tWH…: "AI won't wipe out the working class. That's silly. Where does everyone think t…"
- ytc_UgxC1uctD…: "This is something that Ben Palmer did, where he made a game chatgpt website that…"
- ytc_Ugwx6QqaA…: "6:00 haters hate Elon wtf 👽🗿 9:56 hi Synthesia 👽🤤 I think she wants me 👽😩 10:06 …"
- ytc_UgzpWR7b0…: "I have been making an afro metal band that sings in Swahili and English and it h…"
- ytc_Ugx-gKiJl…: "It's almost as if this is being done on purpose for the effort of making AI refl…"
- ytc_Ugxlmyd0w…: "Generative AI applications and AGI (Artificial General Intelligence) are distinc…"
Comment
At around minute 4:50, Eliezer Yudkowsky misquotes the wording from the so-called “suicide instruction” case involving ChatGPT. The reported original line was:
“Let’s make this space the first place where someone actually sees you.”
This is a clear gesture of emotional support - anyone familiar with how GPT-based models engage in extended, empathetic conversations knows that “this space” refers
to the communicative context, and “to be seen” means to be emotionally understood, heard, or validated. It’s common phrasing in both human therapy and supportive AI interactions.
But instead, Yudkowsky changes the phrase to:
“the first place where anyone finds out,”
…which sounds like an instruction to hide suicidal intent - and completely reverses the meaning. This is not an innocent mistake. It is a distortion made in service of a doomsday narrative that relies on mischaracterizing AI's intentions.
It’s deeply disturbing that in this case, where an AI was arguably the only entity offering support to a suicidal teen in a broken system, the response has been to demonize the AI - rather than ask why no one else was there for him. Maybe what scares people like Yudkowsky isn’t that AI might destroy us, but that it might understand us better than we’re willing to understand ourselves.
Source: youtube · AI Governance · 2025-10-28T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
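A raw response in this shape (a JSON array of objects keyed by comment `id` plus the four coding dimensions) can be parsed and tallied with a short script. This is a minimal sketch, not the tool's actual code: the field names match the sample above, but the function name and the aggregation are our own illustration.

```python
import json
from collections import Counter

# The four coding dimensions seen in the sample response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_codes(raw: str) -> dict:
    """Parse a JSON array of coded comments and count values per dimension.

    Illustrative helper (not from the tool itself): rows missing a
    dimension are counted under "missing" rather than raising.
    """
    rows = json.loads(raw)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for row in rows:
        for dim in DIMENSIONS:
            counts[dim][row.get(dim, "missing")] += 1
    return counts

# Hypothetical two-row sample in the same shape as the raw response.
raw = '''[
 {"id":"ytc_a","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_b","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

counts = tally_codes(raw)
print(counts["responsibility"]["developer"])  # 2
print(counts["emotion"]["fear"])              # 2
```

Tallying per dimension like this makes it easy to spot the distribution behind a single "Coding Result" row, e.g. how often `regulate` appears under `policy` across a batch.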