Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The defender of chatgpt is full of bullshit. If he thinks he's going to just tea…" (`ytc_UgyrdJ-oA…`)
- "So "real" artists used AI generated picture as inspiration for their own creatio…" (`ytc_UgzE-eqof…`)
- "Can't even predict the weather one day in advance, now trying to predict events …" (`ytc_UgzRR8mMQ…`)
- "I do that. The thing is this AI shit just always "supports" you, and if you just…" (`ytc_UgwBpj9EJ…`)
- "“You don’t need my phone” boy stop recording and help that girl out like you’re …" (`ytc_UgyzHntbD…`)
- "If AI takes the majority of jobs and it just puts people on the streets, they wi…" (`ytc_UgzvhhjaK…`)
- "The most I would use AI art for is as a potential reference (but then again, I h…" (`ytc_UgyHtxHDH…`)
- "That is one of the best presentations of what Large Language Models are. Beautif…" (`ytc_UgwC4217T…`)
Comment

> 14:37 imo one of the main issues with chatbots isn’t that they’ll necessarily give bad advice to people seeking questionable advice, it’s the agreeableness. If this guy had discussed taking sodium bromide with a real person it’s likely he would have got some pushback, whereas products like chatGPT are designed to be agreeable & hype you up so you keep using them. Would that have stopped him? Probably not, but it could have

| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-11-27T08:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx52CbMQwGNwScQUe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugznb6Z7K9lX_XSd7w94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzyOvcADeDuJJdVfY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiBW8qpO80y5S3rzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWVaRnqKtZinE7Uv14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxC5Sr4tIHXBLlWTHp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwQDS16T1ri26PDtkd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzUp1V0r70MOKl5EaV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8vzMMJgZyYu8Hsu14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzKCBeBH6747eZxGIt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
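A raw response in this shape can be validated before it is accepted into the coding results. The sketch below is a minimal, hypothetical example: the allowed label sets are only the values observed in the sample above, and the real codebook may define additional labels.

```python
import json

# Label sets observed in the sample response above.
# Assumption: the actual codebook may include more values per dimension.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its id looks like a YouTube comment id
    (``ytc_`` prefix) and every dimension carries a known label.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(len(validate_codings(raw)))  # → 1
```

Dropping unknown labels (rather than raising) keeps one malformed record from discarding an entire batch; a stricter pipeline could log rejects for re-coding instead.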