Raw LLM Responses
Inspect the exact model output for any coded comment; responses can be looked up by comment ID.
Random samples:
- ytr_UgwiRsJLP…: "@Tomatoffel Nope. Not the same. And machine is a machine. Not a person who take …"
- ytc_UgwDLSwfl…: "Looks like this is AI talking to us. I don’t wanna hear anything from Elon Musk.…"
- ytc_UgyELGINJ…: "People who don’t talk to AI models politely are the same people who abandon trol…"
- ytr_UgwEEmeOA…: "As a human being who both enjoys the creative process broadly, and is staunchly …"
- ytc_Ugw-rUnCD…: "I've been seeing people compare AI to a thneed and it's such an accurate descrip…"
- ytc_Ugz4SjMW1…: "Yeahhh ai is doing more harm than good and people mostly use it for memes at bes…"
- ytc_UgwniGHas…: "AI is about to create the biggest crime wave America has ever seen in it's histo…"
- ytr_Ugy3_FrrL…: "@AI_Opinion_Videos This may not be the comment you were looking for ^^. I did n…"
Comment
Years ago they did away with the mental institutions, just turned those people out on the streets with no help.
The costs for mental health have skyrocketed; only the wealthy can afford it now.
We have fostered an environment where if you have a mental health problem you are looked down on or "just need to get over it".
And now you are telling me that a computer program doesn't understand human emotions and mental health? Really, I would never have guessed that!
Your solution is to sue the programmers to make the chatbots better?
I think the problem is much larger than that. We need to change how we approach mental health. We need to force the government to fund mental health and actually care for its citizens. If we have to turn to a computer program for our mental health, we are all crazy.
Source: youtube · AI Harm Incident · 2025-11-08T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzhj-gx8Q-5c34NnSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTBjEaJZO8ljxJE-Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4N5aQdjVHeJTpN154AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwN_qd6U3Xn8MDyIxV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzueJZikme5tgt7QH94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxp2Y_SV943Sn4sHjJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwU9tOgk4VjRttUoC94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHF5HmkmJnBSq0bOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy1vVKnaxzaYzVhIt94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwd3_e7DvAazaFwNsh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
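A raw response like the one above can be parsed and indexed by comment ID before the coded values are stored. The sketch below is a minimal example of that step; the allowed value sets in `SCHEME` are inferred from the records shown here and are an assumption — the actual coding scheme may include values that happen not to appear in this batch.

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption: the real scheme may define additional values).
SCHEME = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec.get("id", "")
        # ytc_ = top-level comment, ytr_ = reply (naming seen in the samples)
        if not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {cid!r}")
        for dim, allowed in SCHEME.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        by_id[cid] = rec
    return by_id

# One record from the response above, used as a lookup example.
raw = ('[{"id":"ytc_UgwU9tOgk4VjRttUoC94AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]')
coded = validate_codings(raw)
print(coded["ytc_UgwU9tOgk4VjRttUoC94AaABAg"]["policy"])  # unclear
```

Rejecting any record with an out-of-scheme value keeps hallucinated labels out of the coded dataset instead of silently storing them.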