Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I’ll take AI creations serious when they start producing images with the right n…
ytc_UgxyvcGfS…
SB, what struck me is that this guest laid out exactly why so many conversations…
ytc_Ugwbk90fq…
We're entering an era of technological advancement which is completely incompreh…
ytc_UgwgaIXeQ…
Funny how AI supporters call artists useless when AI entire business model is co…
ytc_UgwvWBmDm…
Ai "art" isnt perfect though human art isnt either but it just looks better and …
ytc_UgzzyAoLA…
Wait until Satan@s spirit indwells in AI after we get implanted digital id’s. Th…
ytc_UgyI48L2_…
Elon has been very consistent about the dangers of unrestricted advancement of A…
ytc_UgzPhGmCd…
NO AMOUNT OF "UP-SKILLING" WILL MAKE YOU MORE VALUABLE THAN A.I.!
Do you seriou…
ytc_UgzFjAkmq…
Comment
Generally I don't think companies should be held responsible for people being stupid. It's just when one can start seeing a trend and they still have not acted they should be at fault.
That being said, I think AI already has a trend and they just fix each single issue instead of the fundimental one. Where most AI models will often be wrong, and all AI models will occasionally be wrong, the companies should be responsible to inform their users this is the case. Which is something I've never seen them do unless certain keywords are triggered.
youtube
AI Harm Incident
2025-11-28T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwzF9zeZ06MzGAwk2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxU-_9auTarJyXTjXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1bf7r9wcDPUdHz3l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwslIcni6KMOtM3v3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxfiYEqf2_b_2QxdKx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyuxSjCKl-aGtGGSfF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3Pi4FxBsxJhhLs8Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwul8TcYLbk1-zVyV14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyhRNyKSG8tULfGYG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
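The raw response above is a JSON array, one object per coded comment, with four coding dimensions alongside the comment ID. A minimal sketch of how such a batch could be parsed and validated before it is stored — assuming the allowed values per dimension are only those visible in this sample (the full codebook may define more), and that `validate_codings` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may include categories not seen here.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset are prefixed "ytc_".
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unknown {dim} value {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg",'
      '"responsibility":"company","reasoning":"consequentialist",'
      '"policy":"liability","emotion":"mixed"}]')
coded = validate_codings(raw)
print(coded[0]["policy"])  # liability
```

Validating at ingest time keeps a single malformed or hallucinated category from the model out of the coded dataset, rather than surfacing later as an unexplained value in the results table.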