Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “AI” will never be conscious, but it may get advanced enough that our human brai… (ytc_UgyyDn86f…)
- No worries. Every robot and android had a hidden off switch. Just find and activ… (ytr_Ugy2Wo1LU…)
- If you think human created AI can mimic someone's voice to steal your money, you… (ytc_UgyNKzvBp…)
- This is so stupid. I watched another similar dumb ass video last year by a REAL … (ytc_UgyCMEkWj…)
- The robot was clear: the box or your life / The man did not choose wisely… (ytc_UgyTF_YiG…)
- AI is the world's most expensive parrot. It can only spit out parts of what othe… (ytc_Ugx8QQ3i9…)
- So insightful and entertaining. Best use of my 1 and half hour of this AI era so… (ytc_UgxEa_Nj6…)
- Well, well, well, let's unpack this shall we? No one is born an artist. It's a… (ytc_Ugy1QAPS_…)
Comment
So many people in the comments are showing they still don't understand just how exponential this progress timeline is. Yes, AI will be able to self-replicate, self-repair. No, AI will very soon not need humans to have a purpose. Any comforting thought you may have about AI not being that good yet, or not advancing fast enough to be a threat... get that out of your head ASAP. I get it, we're all used to seeing tech advanced at a certain rate. That is over; advancements that used to take years are now happening in weeks. This IS happening. I don't think we stand a chance. Humanity's endless greed and thirst for power is going to be the end of all of us one way or another.
youtube · AI Moral Status · 2025-05-05T13:5… · ♥ 109
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
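Each coded comment carries one value per dimension, as in the table above. A minimal sketch of a sanity check for such a row follows; the allowed value sets are only the values visible on this page, not necessarily the full codebook, and the function name is illustrative.

```python
# Value sets observed on this page (an assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "mixed", "outrage", "resignation",
                "indifference", "approval"},
}

def validate(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside ALLOWED."""
    return [(dim, row.get(dim)) for dim in ALLOWED
            if row.get(dim) not in ALLOWED[dim]]

# The coding result shown above passes the check:
row = {"responsibility": "none", "reasoning": "consequentialist",
       "policy": "none", "emotion": "fear"}
print(validate(row))  # → []
```

A row missing a dimension, or using an unseen value, would be flagged rather than silently stored.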
Raw LLM Response
```json
[
{"id":"ytc_UgyMGZUsSiS1-JSPywp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2cdXvcPRQ4FGbGaV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxRZi-j8H3FN1Ff38h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxqJ3XbrPrKsk_Fcbt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzudV9fApG3cooptP14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx7h7S57eER8Zb8mkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzq5msMExyIujOm33Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8Ym6MpznIV17sUet4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxnNuntcsm4IBsWCed4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2nES6vEUG3T2Hlwt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
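The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how the "look up by comment ID" view can be backed is below; the field names match the response above, but the function name and the two-row sample payload are illustrative, not part of any real tool.

```python
import json

# Illustrative two-row payload in the same shape as the raw response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugx7h7S57eER8Zb8mkd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw2nES6vEUG3T2Hlwt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and map comment ID -> coded dimensions."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugw2nES6vEUG3T2Hlwt4AaABAg"]["policy"])  # → ban
```

Indexing once and looking up by ID avoids re-scanning the array for every inspected comment.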