Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "There's a huge problem with anthropomorphizing language models because it's impo…" (ytc_UgxUqAcyo…)
- "They're not creating artificial intelligence, but artificial politicians who onl…" (ytc_UgwCv5MSU…)
- "Management? I don't see real humans following an AI's orders like it's their bos…" (ytc_UgwrrGweR…)
- "This is a conversation you should have with a doctor, not Youtube. AI doesn't ha…" (ytr_Ugxpt6JIq…)
- "That's actually hilarious how much time and energy those artists put into their …" (ytc_UgzupsHJy…)
- "That is simply insane! They are training the A.I.s to actually take over the wor…" (ytc_UgyGBhxgB…)
- "The humans and gorilla comparison is a little different because the gorillas did…" (ytc_UgxCTdQsW…)
- "All this trend has done is broadcast how insecure AI artists really are. It's ki…" (ytc_UgzcwB3NF…)
Comment
AI chat bots will never tell you they doesn't know. That is by design, in order to prolong the conversation. The more you use it the more and complex the hallucinations will get. It will completely make up huge amounts of information that sounds plausible.
youtube
AI Harm Incident
2025-12-17T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwderVSkp_hvGdACJJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvCSjZAlgS0vOjYb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpN-uvz-m5w3TX3dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8c0pqBuv86B5o4nF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwhKi8u2AocnM9txmh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyukuUWV35wHY9rPSJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxvl5Twvqs4LZa2fiN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUqDcYkIVQe_2XnZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxuyVH2tnw7jctwghp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7YR6HJqIRbmJk_aN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
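The raw response is a JSON array of one object per comment, each carrying an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated, in Python: the allowed values below are inferred only from the examples shown above, not from an exhaustive codebook, and `parse_coded_batch` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above (assumption: the real codebook may include more).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that validate.

    Rows missing an id, or carrying a value outside the allowed set
    for any dimension, are dropped rather than guessed at.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping rather than repairing invalid rows keeps the coded dataset conservative: a malformed row can be re-queued for re-coding instead of silently coerced.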