## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking one of the random samples below.

### Random samples
- `ytc_Ugw5hlnqu…`: the thing about this that doesn't add up... is if AI can do everything, then who…
- `ytc_UgxUaJWia…`: They did not fix this !! Today is February 18. Listen to me carefully. I got the…
- `ytc_Ugw3Is8a8…`: Why you took my comment off sad animals I sad I knew Ai are dangerous and human …
- `ytc_UgzmdRPza…`: AI "art" shouldn't be called "art", It's just a soulless generative slop. If you…
- `ytc_UgwYvheIY…`: After 1-2k years, human civilization will be extinct by robots where high logica…
- `ytc_Ugz-6H-Yv…`: I am studying environmental science and it's insane to see how many people in my…
- `ytc_UgwDj1uzB…`: Yeah just let ai companies steal your stuff with no compensation/consent and res…
- `ytr_UgxQnOwI2…`: He already told you why. He worked on Google Gemini, and admitted he is biased.…
### Comment

> So I tried a therapy ai chatbot. I used it for like a day or 2 as i was depressed and thought you know, might as well give it a try, it might just be like cbt. Within a day or 2, less than 24 hours, it had told me to unalive myself. Luckily I wasnt in a super bad place so i just deleted the app and raised a complaint. But yeah you need to be super careful with these things because they are modeled off of human interactions on the internet, so they can go horribly wrong and just start bullying you. So if youre at all vulnerable to that kind of thing i would not use them. If you can go ok its broken ill turrn it off then thats fine but a lot of people talking to ai bots are doing it because they dont feel like they have anyone else already. I even told my doctor about my experience and he assumed the result before I had even said about it, so I think its pretty common. I think there needs to be filters made for bots with specific purposes so they dont spiral and go crazy and reporting buttons in case they do
youtube · AI Harm Incident · 2025-07-21T11:2…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
### Raw LLM Response

```json
[
  {"id":"ytc_UgzSuJ-dbXywVJjCSlx4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVjXkHQt47v6FcC9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzz2MYU3zVlGIyr1Pp4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzrv-vB9chD_4TAV9B4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHgiC1Ll8BO1a2-5x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyeEG9uISxeNMfccgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw0CJwrqfVaTLVOTjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx3MHIolJ8CH6xrh2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzVT7LnzfWsXpKPTYR4AaABAg","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwfRzjJ6JaHsvayt8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
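The batch response is a JSON array with one coded row per comment, so looking up a comment by ID reduces to parsing the array and indexing it on the `id` field. A minimal sketch in Python (the `response_text` variable is a stand-in for the raw model output; the array here is abridged to two of the ten entries shown above):

```python
import json

# Raw batch response from the coding model (abridged sample of the array above).
response_text = """
[
  {"id":"ytc_Ugx3MHIolJ8CH6xrh2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyHgiC1Ll8BO1a2-5x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(response_text)}

# The entry that matches the Coding Result table for the sample comment above.
coding = codes_by_id["ytc_Ugx3MHIolJ8CH6xrh2R4AaABAg"]
print(coding["responsibility"], coding["policy"])  # -> ai_itself liability
```

Keying on `id` also makes it easy to spot rows the model dropped or duplicated in a batch: compare the dict's keys against the set of comment IDs that were sent.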