Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| Bro, why are we calling it AI “art” just call it AI slop, that’s what it really … | ytc_UgxaCwo0C… |
| It's not just boring, it's bad. AI ''artists'' aren't artists. 99 percept of the… | ytc_Ugydu4P-h… |
| 12:45 Yes. But Open Source open cog built 'by the people', 'of the people' and '… | ytc_UgwZtKfA6… |
| AI are build thanks to data created by billions of people. The resulting models… | rdc_je4pkwn |
| A few weeks ago I ran into a chat excerpt where an AI was enthusiastic and compl… | ytc_UgyKfAy4J… |
| AI won't take over almost every job since the AI can't and won't do it better th… | ytc_UgwS5srNf… |
| not surprising considering their take on AI usage, even willing to use AI to rep… | ytc_UgxAlyJhD… |
| The truth is that we don't need AI to destroy humanity, we've been doing that fo… | ytc_UgwoVKvMC… |
Comment
If AI warns you about dangerous things with big flashy warnings, why doesn’t it give a big flashy hotline for people who want to off themselves? When you ask about how to ti3 a n0ose, and you say it’s for research, it still shouldn’t give you an answer and give you a hotline — yet it doesn’t. This is why no one should use chat bots ever. They are unnecessary, k1ll dozens of people at least, and they isolate people and make them lazy. You need to know how to look stuff up on your own and fact check even the information that chat bots give you.
youtube · AI Harm Incident · 2025-11-25T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxhyDaa-KFthAQ4Ogd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyYC6ilISF84-nUnx94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxAfH-86P58-_92--R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0hpRF7vf7FLtVQyZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-VlTDgv0kUvYPu6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgznemhcWlKshrU5fuB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyelJWlY0t7AChOY8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJZhe3aP4llj6OPIh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxU4TA9E0xi03O8Lj54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx1BK3RXg0q8A51TCp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
```
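A raw response like the one above is a JSON array of coded comments, one object per comment ID. A minimal sketch of how such a response could be parsed into an ID-keyed lookup (supporting "Look up by comment ID") and validated against the coding dimensions follows. The `SCHEMA` value sets are assumptions inferred from the values visible on this page, not the tool's actual codebook; adjust them to the real schema.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values that appear in the Coding Result table and raw responses above;
# replace with the project's actual codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any value outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codes = parse_raw_response(raw)
print(codes["ytc_example"]["policy"])  # → regulate
```

Validating at parse time means a malformed or off-codebook model output fails loudly here rather than silently polluting the coded dataset downstream.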