Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I wonder if these two would talk if you left them in a room together. The female…" (ytc_UgyL-Tox4…)
- "Well what's your alternative? If you want to do something like tax ai pulls or …" (rdc_mr2xwkp)
- "Material science advancements. The rate of discovery has been painfully slow for…" (rdc_ogtsoi2)
- "Also the only solution to not have to deal with this issue or a possible war bet…" (ytc_Ugi2Je4vG…)
- "Ai wont take creative jobs. Cause you can tell what ai makes is so fake…" (ytc_UgxKJ_Q1k…)
- "How about replacing news channels with AI agents who would do damage control for…" (ytc_UgzLDOQLd…)
- "@pedrolopes4778 the issue is when ai gets to thw point where it is entirely ide…" (ytr_UgzRzp7WM…)
- "And somehow people are surprised that AI models trained on human data act like a…" (ytc_UgwQLkqgO…)
Comment
Let's be honest, ChatGPT does not talk like this by default. Zane must've given it explicit instruction or even fine-tuned its model this way. At this point, ChatGPT is no longer a party to the conversation, but an imagination friend. People always try to make AI (LLM) into something new, but it's not. It's an imaginary friend that validates the user's thought. You could argue OpenAI was negligent in their product design, but I think it's hard to show that ChatGPT was the reason, or one of the main reasons that Zane took his life. Would he have talked to his parents or friends or counselor if not for ChatGPT? Probably not.
Source: youtube · Category: AI Harm Incident · Posted: 2025-12-31T13:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyUXU02HXE27oiPAtl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy8JAAxWvUdBb5xDcV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaMOV9P9WOrfg0nXt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzBjXfW1AI1I3RKBdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx2MKOOYgJQMB-j9gt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy241szIRmQQBGxxrV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz6_b0jC2MepbVp-tV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwSwR3OQi9LYGKLbYN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwlsbfZmZVIzrs7rHd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwuXN8dTA8fK56fTO14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
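The raw response above is a JSON array of per-comment codes along the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are only those visible in the samples above (the full codebook may define more; `parse_coding_response` and `SCHEMA` are illustrative names, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred from the sample records above.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"user", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record against SCHEMA.

    Raises ValueError if a record is missing a dimension or uses an
    out-of-vocabulary value, so malformed model output fails loudly
    instead of silently entering the dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {value!r}"
                )
    return records
```

Validating at parse time keeps the lookup-by-comment-ID view trustworthy: any record that reaches the inspector is guaranteed to carry one legal value per dimension.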