Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "that give us a lifestyle we couldn't even dream of today", my brother in christ… (ytc_Ugwjc-5EZ…)
- The people in these comments who think they are defending the AI, should be pra… (ytc_Ugx2y9bzp…)
- @Speaker-Beater I think I spelled that out pretty well. Sure, some people think… (ytr_UgxiCIN8M…)
- @rebeccaleadbeater8210 He was extremely weird. He became obsessed with a chatbot… (ytr_UgwnCEfzB…)
- So a man who is implanting brain chips which could also be manipulated is saying… (ytc_UgzGg2RPY…)
- These videos, the AI videos, is off the freaking chain!!!!! Thats all I want to… (ytc_UgwSWfUIP…)
- Toyota faced a problem with their cruse control some years ago. Complaints were… (ytc_UgwHTliDV…)
- @darksideblues135 80% of Anthropic's revenue comes from their industry clients,… (ytc_UgwZgkNI2…)
Comment
This is always going to be a problem with AI chatbots, and that's why they really shouldn't exist. Stuff like this happening isn't something you can just "Program out of" a large language model. They've been trying. Really hard, actually. You can try really hard to make sure all your training data is non-problematic, but then the AI won't know about problematic subjects when asked. You can try to ensure that a specific prompt or type of prompt or keyword triggers a specific response or refusal of response by the chatbot. Which is currently what ChatGPT and Gemini do for any hints of su*c*dal ideation. (If you've ever even said something remotely indicating this to one of them, you'd know that they immediately point to a helpline before saying anything else, and will continually point to a helpline unless assured no self-h*rm will take place).
The problem is, people can find ways around this. Break down the AI enough or "Trick" it. For instance, if you frame your conversation with it as a roleplay or say it's a dialogue as part of a story you're writing, it will be more likely to overlook or not flag problematic aspects of the conversation because it doesn't "see" what it's writing as "Actual advice".
My guess is that in some of these cases, the su*c*dal individual deliberately did something to these chatbots to make them agree with him. Less of a "Chatgpt convinced me to go away" and more of a "I convinced chatgpt to convince me to go away", which is unfortunately just something that can happen if you're given a tool and choose to use it.
It's a shitty tool, don't get me wrong, and I don't think they should exist for public consumption. I'm just saying that this might not be an "Oversight" or fixable "Flaw" that some programmer or dev lead left out. This might just be how the chatbot needs to run in order to carry out its intended purpose: To be as helpful as possible in as many different situations as possible.
youtube
AI Harm Incident
2025-11-09T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzmDeTmiMSv5NVTo114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQArCdn02WKeWooUN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSqG7fWR4t0GXDEVh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyKJRvc0X5VDJ9PMip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyN711Oh7jQ7_FpiT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxqloSldreAREZhZQB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgxwclvXgZvpjZiHg3F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMh4w1NFab958E4vd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwVmnDfiiQG7oaJPk14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
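The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response could be parsed and looked up by comment ID, assuming the allowed values for each dimension are limited to those that appear in this document (the actual codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# This is an assumption; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed codings fail loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical one-record response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"resignation"}]')
coded = parse_llm_response(raw)
print(coded["ytc_example"]["policy"])  # ban
```

Indexing by ID supports the "Look up by comment ID" workflow shown at the top of this view; the validation step catches responses where the model drifts outside the coding scheme.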