Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Well, you all heard it. God father of AI is calling for a world government…. Wo…" — ytc_Ugz8kXJ54…
- "I remember a protest last year, well a planned protest, by workers. The Chinese g…" — ytc_Ugz_Uk8vM…
- "This is a turtle he's a robot that is made to talk to people he is pronounced to…" — ytc_UgwpHU_j7…
- "The only people who like "ai" are those who think they suddenly became skilled c…" — ytc_UgxGSNbry…
- "Surely it is a robot created to mistreat retirees when they demonstr…" (translated from Spanish) — ytr_UgzQxNHV_…
- "Man, that part about the Dark Fantasy just resonated so hard with me. I've been …" — ytc_UgxkxNtG3…
- "Well, no it doesn't work well. Also this comment should block your internet beca…" — ytr_UgxFDDLUb…
- "Hopefully we reach the point of being able to only have to work 10 or 15 hours a…" — ytc_UgyNCkjiE…
Comment
I wish you would just state the obvious, that "AI" in its current form is dangerous, harmful, and benefits no one. Machine learning (as opposed to generative AI) has its uses in industry and scientific application, but the way people use it as an ad-hoc therapist, medical professional, and replacement for google is only going to get more people killed. You work in medicine, you know first-hand how it can't even sufficiently replace a human transcriptionist without hallucinating entire passages of conversation -- and yet you end this video engaging with the same systems which have only caused harm to human lives and our environment? I'm disappointed in you. You didn't have a conversation with ChatGPT5 about this, man. You fed the hallucination synthesis machine a prompt and are reporting back on the thoughtless wordslop it spat back at you. Why?
Rest of the video is good and informative as always, so thank you for the work that you do. Please stop impoverishing that work by engaging with the AI like it's anything more than an artificial ignorance.
youtube · AI Harm Incident · 2025-11-25T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxD8aynWUglUDIJ-nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz5Hvv4MRcuXaRvtXR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzYqwh3xI9-JK5K4CJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzZpT5bd-EJCJhjheB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwjF2NyU6SmyRqfe054AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyL7bD6SylDn3jN_K54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwIO3EMZ0j5CHeXpPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxpA-yGAeRVnNExRXd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz9yAgVZyJcg7c6EQR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzfesYOiQqhtmCvrF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
```
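A raw response like the one above can be indexed by comment ID so that each comment's four coding dimensions (responsibility, reasoning, policy, emotion) can be looked up directly. The sketch below is a minimal illustration, not the pipeline's actual code: the `index_codes` helper and its fallback-to-`"unclear"` behavior are assumptions, and only the record shape shown in the raw response is relied on.

```python
import json

# Hypothetical sample in the same shape as the raw LLM response above
# (single record shown for brevity).
raw = '''[{"id": "ytc_UgxD8aynWUglUDIJ-nB4AaABAg",
           "responsibility": "ai_itself", "reasoning": "consequentialist",
           "policy": "none", "emotion": "indifference"}]'''

# The four dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_response: str) -> dict:
    """Map comment ID -> {dimension: value}.

    Assumed behavior: a missing dimension defaults to "unclear", and a
    response that is not valid JSON yields an empty index (so callers can
    fall back to all-"unclear" codes).
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return {}
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
        if isinstance(rec, dict) and "id" in rec
    }

codes = index_codes(raw)
print(codes["ytc_UgxD8aynWUglUDIJ-nB4AaABAg"]["emotion"])  # indifference
```

Because the model's output is free-form text, the parse step is the natural failure point; guarding `json.loads` keeps one malformed batch from crashing the lookup.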