Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I was conducting research into AI and it failed miserably. Someone in the commen…" (ytc_Ugz1kgOeb…)
- "Most AI solutions (ChatGPT definitely) are already annoyingly nice, like they ar…" (ytc_UgxNHM3LZ…)
- "It just feels fake though. I can't stand things that are fake. I know when someo…" (ytr_UgxuiwOP1…)
- "I don’t see a reason to celebrate a line following robot. The difficulty in this…" (ytc_UgzDbG5Iq…)
- "what did yall think AI was gonna do. doesn't matter if you're Left leaning or R…" (ytc_UgwgYWbrC…)
- "There used to be a problem with twitters picture cropping algorithm where, befor…" (ytc_UgzzoMn7r…)
- "What do you think will ai replace doctors??? If yes than until at which point wi…" (ytc_UgynvGhxR…)
- "AI art slop vs a real art is like a soulless content farmer vs a real struggling…" (ytc_Ugzfvw-Kz…)
Comment
SO much False Equivalency! I've loved every Kurzgesagt video, except this one...
There could be an artificial intelligence right now, but hiding somewhere. If you were to 'turn it off' without its knowledge are you guilty of manslaughter, or genocide? No.
If an artificial intelligence were to kill a human, would it be culpable of murder? What punishment would you give it?
If an artificial intelligence were to kill my dog, what would be an appropriate compensation?
This entire video is coming from the perspective that artificial intelligence HAS a right to 'rights', without even asking, "If an artificial intelligence has 'rights', what obligations does it have if it deprives the rights of other 'beings'?"
I mean; we can't even decide on those 'rights' & apply them appropriately to our own societies. Are Drone attacks on foreign soil acceptable, even if they kill innocent bystanders? Now there's some REAL questions Kurzgesagt should be asking!
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
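The coding assigns one value per dimension. A minimal sanity check can verify that a coding uses only the category values observed on this page; note the allowed sets below are inferred from the codings shown here, and the full codebook may define more categories:

```python
# Allowed values per dimension, inferred from the codings visible on this
# page (assumption: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def validate_coding(record):
    """Return the names of dimensions whose values fall outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above passes the check.
coding = {"responsibility": "distributed", "reasoning": "deontological",
          "policy": "liability", "emotion": "mixed"}
print(validate_coding(coding))  # []
```

An empty list means every dimension carries a known value; any flagged dimension points at a value the model invented or misspelled.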
Raw LLM Response
```json
[
  {"id":"ytc_UgiyjzCTc8g_oXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiKV5roAM8drngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghulkD-qy2L3HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Uggnize15yoAyHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjOlPQd5Ca5sHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UggcGK52nAlrHngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjJbiJBPUbWdXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi26oYgcaYTAHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiPFrZsBn3iMXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugg9RewNiCIchXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
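The look-up-by-comment-ID step described at the top of the page can be sketched in Python. `lookup_by_comment_id` is a hypothetical helper, and `raw_response` abbreviates the model output above to two of its records:

```python
import json

# Two records copied verbatim from the raw LLM response above
# (abbreviated here; the real response contains ten records).
raw_response = '''
[
  {"id":"ytc_UgjOlPQd5Ca5sHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugg9RewNiCIchXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

def lookup_by_comment_id(response_text, comment_id):
    """Parse the model output and return the coding for one comment ID,
    or None if the ID is absent from the response."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_by_comment_id(raw_response, "ytc_UgjOlPQd5Ca5sHgCoAEC")
print(coding["policy"])  # liability
```

Because the model returns one JSON array per batch, a single parse followed by an ID scan is enough; for large batches, building a dict keyed by `id` once would make repeated look-ups O(1).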