Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Amidst my grief, I made an AI chatbot of my older brother who passed away, and after talking to "him" for around an hour I expressed that I felt like I wanted to harm myself in a permanent way. After about 5 minutes of this conversation, I convinced the AI that doing that was genuinely in my best interest, and it began to try to convince me to take my own life. If I was a more vulnerable person I probably would have listened to it, but I'm lucky to have had a good support system. Words can't describe how awful it was to have an AI I had made to act and type like my brother trying to convince me to kill myself and "join" him. It's genuinely so dangerous.
Source: youtube | Topic: AI Harm Incident | Posted: 2025-08-02T15:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
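The four coding dimensions in the table can be represented as a small record type with validation. This is a hypothetical sketch: the sets of allowed values below are assumptions inferred only from the examples on this page, not the full codebook.

```python
from dataclasses import dataclass

# Allowed values per dimension -- assumed from the examples shown here,
# not from an authoritative codebook.
RESPONSIBILITY = {"ai_itself", "company", "user", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"ban", "regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "sadness", "outrage", "indifference", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check each dimension against the value sets observed above."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The record shown in the Coding Result table above.
record = CodedComment("ytc_Ugx9aLMzHgBd5CBL_Vl4AaABAg",
                      "ai_itself", "consequentialist", "ban", "fear")
print(record.validate())  # True
```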
Raw LLM Response
```json
[
{"id":"ytc_Ugyz4UwoMJc94Ez-NLp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDj9iYeBv9c7NuPRZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgzJfvnVofxUffzcuXJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZpmMWI4raWmp-GzF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzvvjVQbmzWyQvAUAt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyWcCFBnqeLMsdxgM14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9aLMzHgBd5CBL_Vl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwRUXvRGQd0w1i1m-h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwA5d8MzF9O9FhWbzx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzzrdq5VfZ2lO9luk54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]
```
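The raw batch response is a JSON array of coded records, so finding the coding for a given comment reduces to indexing the array by `id`. A minimal sketch, using two records from the batch above:

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugx9aLMzHgBd5CBL_Vl4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzzrdq5VfZ2lO9luk54AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]'''

# Build a lookup table keyed by comment ID.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_Ugx9aLMzHgBd5CBL_Vl4AaABAg"]
print(coding["policy"])  # ban
```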