Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect (previews truncated)

- People are saying "Oh, the scammer is just lying." But it's entirely possible th… (ytc_UgzAU4v-Y…)
- I wasn’t born with a pencil and paper in my hand, it took years and years to get… (ytc_UgywjbrTT…)
- I am disabled and do traditional art. There are days when I can't use my hands o… (ytc_Ugx79LuiH…)
- I hate conversations like this in real life. They feel like going around in a ci… (ytc_UgzPqmiBU…)
- Dude: AI, I command you to behave like an evil Genius! .... AI: 😈 … (ytc_Ugx9fPrIg…)
- Short answer: yes — this is classic fear-mongering. Longer, precise breakdown: … (ytc_UgzlM8qsH…)
- What's fascinating is that in just a few decades this is gonna be a reality with… (ytc_UgxuCNItc…)
- Well in the case of Dennett they might be interchangeable, since he considers co… (rdc_djzog02)
Comment
This idea seems ridiculous to me, because people aren't smart enough to rule even some basic AI models that are far from being any close to general intelligence. This is, if not delusional, still more futuristic idea, so this won't happen any time soon, at least. But I would doubt if it would happen at all in next 100 years, because the way AI works is the thing you can't even comprehend, not talking about fully understanding. The absolute control means, at it's least, good understanding of how things work, but none of humans even close to that level. So yeah, before it's too late, we should stop this at all costs, because quantum computers will speed up the process and we can't risk that much. It's in interests of every single human left. Others on top don't deserve to be even considered a human, because they lost all their humanity. Only humans are able to defeat the nowadays "monsters" on the top, who had completely forgotten their root, but we should just stop working silently for our own demise. #NOCROWNFORFAKEKINGS
| Field | Value |
|---|---|
| Source | youtube |
| Title | Viral AI Reaction |
| Posted | 2025-12-08T14:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxdGnG2nkjcScls8MN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwv8B4BZs7K011uVJ94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwvo1tr95BpQro8ZiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxKHjUkZN0iaoGfWpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyqMJeY-QffR0aN_Kd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_z4KwaMY_4TOtLtR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVjTrkIx1O7CVDJVh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwDQgTErKMc1tJJdDZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgziMI7isUbDYK1z-wJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxxD0_ceqSnxWl2TS94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
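The raw response is a JSON array of per-comment code assignments, one object per comment ID. A minimal sketch of how such a batch could be parsed and sanity-checked before loading it into the coding table is below. Note the allowed value sets are inferred only from the codes visible in this view; the real codebook may contain additional categories, and the `validate` helper is hypothetical, not part of the tool shown here.

```python
import json

# Abbreviated raw LLM response (two entries from the batch shown above).
raw = '''[
  {"id": "ytc_UgxdGnG2nkjcScls8MN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyqMJeY-QffR0aN_Kd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# Allowed code values, inferred from the visible output only (assumption:
# the actual codebook may define more values per dimension).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def validate(entries):
    """Map each comment ID to its codes, rejecting any unknown code value."""
    coded = {}
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim} = {entry.get(dim)!r}")
        coded[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return coded

coded = validate(json.loads(raw))
print(coded["ytc_UgyqMJeY-QffR0aN_Kd4AaABAg"]["emotion"])  # indifference
```

Keying the result by comment ID mirrors the "look up by comment ID" view above: a coded record such as the deontological/indifference row in the table can then be fetched directly by its `ytc_…` identifier.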