Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking a record up by its comment ID (a sketch of that lookup follows) or by browsing the random samples below.
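As a rough illustration of the lookup, the sketch below builds an ID-to-record index from a raw batch response in the schema shown at the bottom of this page. The file name `raw_llm_response.json` and the `lookup` helper are assumptions for illustration, not a documented interface.

```python
import json

# Load one raw LLM batch response (file name is an assumption;
# the records follow the schema shown at the bottom of this page).
with open("raw_llm_response.json") as f:
    records = json.load(f)

# Index every coded record by its comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict:
    """Return the coded record for a comment ID, or raise KeyError."""
    return by_id[comment_id]

# Example, using an ID from the raw response shown below.
print(lookup("ytc_Ugy0o-dHbOrxu9Y1VXd4AaABAg"))
```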
Random samples
- AI may come to be smarter than us. But it may face the same roadblock that huma… (`ytc_UgyVHjKCR…`)
- Somehow ai companies have spread propaganda that "artists are the one percent ho… (`ytc_Ugxl1adoh…`)
- Learn to think in systems. That's what's going to be the primary skill needed in… (`ytc_UgwvJwno0…`)
- The biggest gamble is these 6 AI company leaders think they will remain smarter … (`ytc_Ugxpveisy…`)
- Oh yeah, ruin an AI because you don't like them being based on your art by delay… (`ytc_UgxZ6gNZ0…`)
- Because AI uses media to generate it's output. It will need to mark it's own con… (`ytc_Ugzzt4aWD…`)
- Super interesting insightful podcast. BUT this channel does seem strongly biased… (`ytc_UgxgN-p4m…`)
- I learned that I should AI what I can do so you have new content.… (`ytc_UgzNu3Chz…`)
Comment
> The things that normally motivate violence among humans are things that produce fear. Xenophobia, resource scarcity, culture clash, love/hate... ultimately, we fear death and gravitate to things that bring life. We seek out food, lovers, and an environment that's safe for our kids to grow up in.
>
> And what makes a safe environment better than a developed society with smart people? Maybe the smart folks build an A.I. that can babysit while mom n dad go out for a date night.
>
> AI doesn't need food or water to survive. It doesn't need air, doesn't need blood, it doesn't need to display its physical fitness for sexual or reproductive purposes. It does, however, need humans.
>
> There is no logical reason for AI to attack humans. If it ever becomes weaponized against humanity, it will be because some humans deliberately designed it to do so.
youtube · AI Responsibility · 2025-10-09T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0o-dHbOrxu9Y1VXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzfo6nZTSjxuTICYBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxm6fRVATIWLDbRn8p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwKinVxUv36JySAWSZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBcSkBECTkjBUg6Z54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwFCYlEvSfQxjBcCKt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwdDmaq82Bg_V1IURB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCaoMXdUr8U70HNGJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSI2eEFYlzRH7orgZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyoVA3HoG6n2N3Pl6x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
```
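The Coding Result table above is one record of such a batch rendered dimension by dimension. A minimal sketch of that rendering step, assuming only the four coding dimensions and the markdown table layout shown on this page (the `render_coding_result` helper is hypothetical):

```python
import json

# One record in the schema of the raw response above (values copied from it).
RAW = ('{"id":"ytc_Ugy0o-dHbOrxu9Y1VXd4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}')

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def render_coding_result(record: dict) -> str:
    """Render one coded record as a markdown table like the one above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in DIMENSIONS:
        rows.append(f"| {dim.capitalize()} | {record[dim]} |")
    return "\n".join(rows)

print(render_coding_result(json.loads(RAW)))
```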