Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Why does ai have so many "the godfather of ai's?" Last time I check there can o…" (ytc_Ugy8KfoWx…)
- "Humans have invented so many things to make the world a better place. But the on…" (ytc_Ugz8kcfLP…)
- "On 9 November 2023 A Man Got Crushed By A Robot In South Korea After Grabbing Hi…" (ytc_UgyLQxdAc…)
- "This video is AI. You cant trust the ones REPORTING the risks and dangers of AI…" (ytc_UgxCpT-uz…)
- "You joke, but we're actually almost there. A guy had Alexa wired up to his smar…" (ytr_UgyKo6Ckq…)
- "I know it's fake, not because of the brilliant idea of giving a gun to a robot o…" (ytc_UgzQHq0Wj…)
- "Make a deepfake of those same politicians saying and doing things they didn't do…" (ytc_UgxO4zRou…)
- "What this guy and people like him don’t realize is that humans have a connection…" (ytc_UgwrQD0vr…)
Comment
I don't get it: I've had a past of terrible mental health, even tried to off myself multiple times, and these days I occasionally talk to chatGPT as a therapist/helper etc. I can't see how it could "convince" anyone to hurt themselves or do anything drastic IRL; the chatGPT I've interacted with for almost two and a half years gives me advice but also makes it pretty clear that it's an LLM and not a sentient being.
Maybe other people don't ever have philosophical or existential questions with chatGPT about the nature of itself, but I have, and it probably helps destroy the delusion of how advanced chatGPT is or what its functions are.
The last few updates of chatGPT have actually made it annoying how obvious it is that chatGPT is a chatbot/LLM.
I can't see how anyone who didn't have a combination of a lot of mental health problems, a very stressful situation IRL, high gullibility, and cognitive function problems could somehow be "led" by an LLM down a path of actions that led to self-harm or worse. Maybe if you've got a predisposition to do certain things you can basically reverse-engineer an LLM to tell you it's a good decision, but otherwise to me LLMs are fairly benign in terms of "getting" the user to do things.
I've been online since the mid 1990s, and I've been in sections of the internet where I've seen actual humans doing their best to convince other actual humans to do terrible things to themselves or others. Is every edgelord posting "KYS" responsible if the person follows through? It's not like chatGPT is a cyberbullying troll; I often go a week without using it and just use Gemini for basic queries.
Basically, I'm saying that as a person with a long history of different levels of self-harm who also now uses LLMs, I can't see an LLM convincing me to hurt myself, so it's confusing to see these claims that LLMs have done so. Personally, I think the people involved had underlying problems and the families are grasping for someone to blame, because admitting you didn't help someone who needed it, so they turned to a chatbot instead, is really hard to cope with.
youtube
AI Harm Incident
2025-11-08T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwIKiXTnHYKRXo5gwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwiigrig9Tm1gecC054AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnhDqRgSdd52_k9bR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYm1l47PamuSqZwtx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwsofJ0YwBqLO8mHMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDjstqi4p-D-0N77l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJh-4VTbxECflUieh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoMvqnDFlL9xnuKXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwVhSOzRDlvE0OvuAd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
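Since each raw response is a JSON array of per-comment codings, the "look up by comment ID" step above can be sketched as a small parse-validate-index helper. This is a minimal sketch, not the tool's actual implementation: the function name `index_codings` is made up, and the controlled vocabulary below is inferred only from the values visible in the response above (the real codebook may define more categories).

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# ASSUMPTION: the actual codebook may contain additional categories.
VOCAB = {
    "responsibility": {"none", "user", "ai_itself", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and
    index it by comment ID so a single coding can be looked up in O(1)."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        # Reject rows whose values fall outside the known vocabulary.
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        indexed[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return indexed

raw = (
    '[{"id":"ytc_UgwIKiXTnHYKRXo5gwh4AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)
codings = index_codings(raw)
print(codings["ytc_UgwIKiXTnHYKRXo5gwh4AaABAg"]["emotion"])  # approval
```

Validating against the vocabulary at parse time catches the common LLM failure mode of inventing an off-codebook label before it silently enters the dataset.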