Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- How do you find out if your art has been used for AI? I wanna know if my art has… (ytc_Ugxvk2d5u…)
- Happy to be in the custom automotive paint and body market. I repair and paint e… (ytc_UgzJQm8_e…)
- The rich doesn’t want middle class people anymore. They want ai and machinese t… (ytc_Ugx2SIzzq…)
- Bro. I just researched abt it and i found that it makes real looking videos up t… (ytc_UgzdCnvC1…)
- I don’t really know how to digest this AI stuff. It’s seems a little scary but t… (ytc_UgzzFWJ0m…)
- It seems that Gödel's theorm effectively proves that alignment must fail once su… (ytc_Ugy62p-Ao…)
- The echo chamber effect has been going on for numerous decades in the domain of … (ytc_UgxsnC9T2…)
- at least anthropic understands this. models have already demonstrated their wil… (ytc_Ugw7pvc-X…)
Comment
So glad to see another Chubbyemu video! As for what happened to AJ, his case demonstrates why it's such a bad idea to just blindly trust an answer from AI, especially when it comes to anything about health, since AI can be wrong (in fact, it's wrong frequently). Maybe the video title should have been more like "A man asked AI for health advice and because he trusted it too much, he took something that cooked every brain cell" (but I see how that would've been a bit too long for a title). I also think it's funny as hell how the ChatGPT AI kept denying that it had given anybody advice to take sodium bromide. I'm glad AJ was okay in the end, and hopefully he learned a lesson about not trusting AI too much (and I also agree that some of it was on AJ for deciding to take sodium bromide in the first place, because as Dr. Bernard said in the video, AI didn't hold him down and force him to take it). Great video, and I can't wait to see the next one.
youtube · AI Harm Incident · 2025-11-26T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
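The four coded dimensions map naturally onto a small typed record. The sketch below is a minimal Python illustration, not the tool's actual implementation: the `CodedComment` name and `validate` helper are hypothetical, and the label sets contain only the values visible on this page, so the real codebook may define more.

```python
from dataclasses import dataclass

# Label values observed in this sample; the real codebook may define more.
RESPONSIBILITY = {"user", "company", "government", "parent", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability", "ban"}
EMOTION = {"approval", "outrage", "fear", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded YouTube comment, mirroring a record in the raw LLM response."""
    id: str             # e.g. "ytc_UgxL1wPpxnFh4A_wKoN4AaABAg"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True if every dimension uses a label seen in this sample.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```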
Raw LLM Response
[{"id":"ytc_UgxL1wPpxnFh4A_wKoN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzoTwDjlRZkt60euMl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz5GycpaaR9b1fKcNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiZ2iBj_awRMB-XSd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw90k1sA4VcITDq-dJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrmsOItWCgKTO6b3p4AaABAg","responsibility":"parent","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzxZGgs6zEnK5UHQvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyu5nO9VkBTPvSWBuV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEF5Df4yg-M8fLojJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfoINCuwy8Z3EtX6J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]