Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID (a lookup sketch follows the sample list below), or click one of the random samples to inspect it.
- Yes, Claude in particular seems invested in the concept of its own qualia. It's… (ytr_Ugzonnlir…)
- What if we introduce the AI to each other and they can grow up as brothers, that… (ytc_UgxkdR5d6…)
- This is a religious argument: "We don't understand this thing, but we can tell s… (ytc_Ugx7UkPa6…)
- Humans recieve input(Stimuli) and this is converted to output(How we respond). O… (ytc_UgjWtK98d…)
- worst part is the more content we have of 2027 scenarios, means current AI being… (ytc_UgyabhwfK…)
- I ROBOT WILL COME TO BE A REALITY😢😢😢😢😢😢😢😢 NEXT THING YOU KNOW, THE ROBOTS WILL T… (ytc_UgxzVbZVP…)
- I'm open to be wrong, but the solution to this AI apocalypse is very simple... A… (ytc_UgwQjkCLQ…)
- It'd be really easy to identify the different equations they're using with a hig… (ytr_UgxqYVWA2…)
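A minimal sketch of the ID lookup is shown below. It assumes a hypothetical layout in which each raw LLM response is saved as a JSON file under a `raw_responses/` directory, with every record carrying an `id` field like the ones above; the directory name and helper function are illustrative, not part of the tool.

```python
import json
from pathlib import Path

def build_comment_index(raw_dir: str) -> dict[str, dict]:
    """Map each comment ID to its coded record.

    Assumes each file in `raw_dir` holds one raw LLM response:
    a JSON array of objects that each carry an "id" field
    (e.g. "ytc_UgyXrHgrZALyg8Kk2MB4AaABAg").
    """
    index: dict[str, dict] = {}
    for path in Path(raw_dir).glob("*.json"):
        records = json.loads(path.read_text(encoding="utf-8"))
        for record in records:
            index[record["id"]] = record
    return index

# Example lookup (directory name is hypothetical):
# index = build_comment_index("raw_responses")
# print(index.get("ytc_UgyXrHgrZALyg8Kk2MB4AaABAg"))
```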
Comment
The problem is that, the title is honestly kinda clickbaity. It says that AI was the one that cooked his braincells and it makes people believe that the AI was the cause of the effects, but it's the AI that said to use sodium bromide, and those are the side effects: if the side effects of a medicine hurt you, it's not the doctor's fault for giving it to you. Obviously here the AI was very wrong, but it's not _only_ the AI's fault, it's simply what jumpstarted it.
youtube · AI Harm Incident · 2025-12-05T15:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhW8rHnx65pu3_vdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLrK2u16r2iqN_LD54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzM7LiiRaGk9Q1oCbN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5Smh27qO2wDnLLt94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZagPQaku2KoaHoiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXrHgrZALyg8Kk2MB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyyVZJvcJNvJW1EUG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUtujQbv27C5BVWG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxiDpIsVpj2x6n1A_Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxh931olmLKhICWOu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
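For reference, a minimal sketch of parsing and sanity-checking a response like the one above. The value sets listed are only those observed in this output, not necessarily the full codebook, and the function name is illustrative.

```python
import json

# Values observed in this particular response; the full codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag values outside the observed sets."""
    records = json.loads(raw)
    for record in records:
        for dimension, seen in OBSERVED_VALUES.items():
            value = record.get(dimension)
            if value not in seen:
                print(f"{record.get('id', '?')}: unexpected {dimension}={value!r}")
    return records

# Usage: records = parse_raw_response(raw_text); each record keeps the
# comment ID plus the four coded dimensions shown in the table above.
```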