Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "that first ai video . the hair aint bouncing and its not letting the background …" (ytc_UgxicaJeA…)
- "i am 62 years old.. this is what i know. artificial intelligence is creating…" (ytc_UgxqlLY07…)
- "No mention of AI being a threat to India. AI dies CRUD well, does it not?…" (rdc_mokzjle)
- "rediscovered this after the release of sora 2 and i still love how the internet …" (ytc_UgxqSPeUu…)
- "Why people talk to AI. You can talk whenever you want, about whatever you want,…" (ytc_UgyGdCZHP…)
- "Even if artistry isn't a talent, it IS a skill that (as you rightfully pointed o…" (ytc_UgyLGhE5T…)
- "You should try to protect your interests by signing the petitions against the…" (ytr_Ugykc0Lth…)
- "As I'm watching this, I'm also wondering if Jake T and Prof Hinton are AI genera…" (ytc_UgxBydS8d…)
Comment
So, You aren't going to agree with me and that's fine but as someone with similar conversations in my chatgpt history let me say that suicide hotlines exist for people who don't understand the appeal of suicide can say "oh they could have and should have called the hotline." Do you think people don't know about the hotline? The problem is what could the hotline possibly say or do? People know and don't call because why would they want to get in an argument with some suicide hotline worker who is essentially ignoring everything they are saying and then have to worry about "oh they called the police, well my life is about to get a whole lot worse."
Suicidal people want to be able to have real conversations about what's bugging them and they cant do it with the people they know who get overly emotional about the thought of them dying, they cant do it with a suicide hotline because it isn't a conversation its like discussing religion with someone very religious they are never going to even admit to understanding your point, places on the internet that people can actually talk about suicide regularly get shut down, so yeah you settle for a convo with something you know isnt real. It irritates me how much people essentially aren't allowed to have a conversation about things they are thinking about and then people are like "wow every time people try to have an outlet to speak their mind on this we make sure they cant. I wonder why they are so lonely"
It will please you all to know that actually chatgpt is way more annoying to talk to about stuff like this now. You say something and it constantly tries to tell me to talk to humans or otherwise redirect so it's like "wow that's a sharp point and one shared by the philosopher [whoever]. Do you want me to summarize the general points of his philosophy overall?" No, I wanted to talk about what I was talking about, not some random philosopher. So yeah, I talk to chatgpt about it less. Now I'm back to just sitting by myself and thinking about what a scam life is on my own.
Platform: youtube
Incident: AI Harm Incident
Posted: 2025-11-08T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
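The Coding Result table is a per-dimension view of a single coded record. As a minimal sketch, the rendering can be reproduced from a record dict like those in the raw response below; the field names come from that JSON, while the `coded_at` timestamp appears only in the table, so it is passed separately. `to_markdown` is a hypothetical helper, not part of the tool.

```python
def to_markdown(rec: dict, coded_at: str) -> str:
    """Render one coded record as the two-column markdown table."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {k} | {v} |" for k, v in rows]
    return "\n".join(lines)

rec = {"responsibility": "distributed", "reasoning": "contractualist",
       "policy": "regulate", "emotion": "mixed"}
table = to_markdown(rec, "2026-04-27T06:24:53.388235")
print(table)
```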
Raw LLM Response
```json
[
{"id":"ytc_Ugw4YDq4wN0QKVa5KRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyPmsR-M_HYP3jGMUN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEgAMfdyTEsOoot4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"sadness"},
{"id":"ytc_UgwNMeFt302OzbYEWNN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzT4DQohTQXG0Jakct4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3OuysOBhnzrtI-gB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwfSSN0fJdbjqg9Wnh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSLohrpIkb03ppA3d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyHXMh_6qbLjUOiyD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAyYhMSMKQIROC1kx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
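The raw response is a JSON array of coded records, and the page supports looking a record up by comment ID. A minimal sketch of that flow in Python: parse the array, index it by ID, and sanity-check each record. The allowed value sets here are only those observed in this batch, not necessarily the full codebook, and `index_by_id`/`validate` are hypothetical helpers.

```python
import json

# Value sets observed in this batch; the real codebook may define more (assumption).
OBSERVED = {
    "responsibility": {"none", "user", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self", "regulate", "ban"},
    "emotion": {"resignation", "outrage", "sadness", "indifference", "fear", "mixed"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and
    index the records by comment ID for lookup."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

def validate(rec: dict) -> list:
    """Return a list of problems: missing dimensions or unseen values."""
    problems = []
    for dim, allowed in OBSERVED.items():
        value = rec.get(dim)
        if value is None:
            problems.append(f"missing {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim}={value!r}")
    return problems

raw = """[
 {"id":"ytc_Ugx3OuysOBhnzrtI-gB4AaABAg","responsibility":"distributed",
  "reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]"""
coded = index_by_id(raw)
rec = coded["ytc_Ugx3OuysOBhnzrtI-gB4AaABAg"]
assert validate(rec) == []  # record conforms to the observed value sets
```

Validating against observed values rather than a hard-coded schema keeps the check honest about what the batch actually contains; any value outside these sets is flagged for review rather than silently accepted.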