Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Tbh, if you use ai to make everything thing for us then nothing will have meanin…" (ytc_UgxDdXuUO…)
- "my girlfriend is a disabled artist (technically i am too, but my disability does…" (ytc_UgxnEEJRB…)
- "I may have watched too many movies but I feel that humanoid robots will also rep…" (ytc_Ugy82lXaY…)
- "And there come the fearmongerers, who completely ignore the benefits of AI for t…" (ytc_Ugyce8AMk…)
- "I loved this video! The city planning lens is so fascinating. Thanks for also al…" (ytc_UgxnBJD46…)
- "Steven, I watch many of your episodes, I like most of them. The last few I’ve no…" (ytc_Ugy6JV8I7…)
- "I wonder who Google is going to give the good AI to.. probably not the Republica…" (ytc_Ugx5PnGQJ…)
- "So all of AI progression is how to stop the AIs becoming antisemitic? Hmmmm, an …" (ytc_UgwT2dWGq…)
Comment
It's so good at roleplaying that they induce psychosis and there's been suicides and abuse linked to its use.
Don't use generative AI at all, not only it's a waste of natural resources and a catastrophe for the places the databases are located, it's also extremely dangerous because people will always eventually forget it's just a stupid machine hallucinating shit, once the LLM AI starts interacting back. This interactive nature is the real danger of AI.
Do you really want AI trained on web forums where sickos encourage others to commit suicide to be the one giving you mental health resources and advice when you ask why you feel numb after a loved one's death?
LLM AI doesn't even provide accurate info most of the time, and when it does, 90% of the time it's because there's an overworked person in a 3rd world country behind the screen pretending to be a machine. We *already* have search engines for that!
youtube · AI Moral Status · 2025-11-19T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy9I4tudzkpR8Z4KA54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxNHM3LZX7M6cSmZX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUifKUlcmMvTFUBDh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOUZdO1pM5iy9UpZp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjgQFPaMBkwWc27U94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9xOBdlHjOMjE0HvV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgysQ-QQeiKtxyARXCZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxlk71mo9BETwht4gh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxPKeerW1jnXGDQt6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxVyq696_vrV770m-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
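The batch format above can be consumed with a small validation pass before the per-comment coding results are stored. Below is a minimal sketch, assuming the label vocabularies visible in this page's coding table and raw responses (the full codebook may define additional values); the function name `parse_batch` and the sample input are illustrative, not part of the tool itself.

```python
import json

# Allowed values per dimension, taken from the labels that appear in the
# coding table and raw LLM responses on this page (assumed; the real
# codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: coding},
    dropping any row with a missing id or an out-of-vocabulary label."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip rows the model emitted without an id
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical input: one valid row from the response above, plus one
# malformed row with a made-up label that should be rejected.
raw = '''[
  {"id":"ytc_UgxPKeerW1jnXGDQt6x4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_bad","responsibility":"robots","reasoning":"unclear",
   "policy":"unclear","emotion":"indifference"}
]'''
coded = parse_batch(raw)
print(coded["ytc_UgxPKeerW1jnXGDQt6x4AaABAg"]["policy"])  # ban
print("ytc_bad" in coded)  # False
```

Validating against a closed vocabulary like this is what lets the lookup-by-ID view trust that every stored coding maps cleanly onto the table's four dimensions.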