Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- "At 58:46 @DoomDebates goes full speed about how we humans don't have a payload a…" (ytc_Ugw6xYBBL…)
- "Even a 10% chance is too risky. I've never liked any of this AI stuff, it's wron…" (ytc_UgyQY9NWa…)
- "No, because those models can't tell what the neural networks are doing. All it w…" (rdc_myv43he)
- "File a law suit to force them to do through job acquiring evidence before arrest…" (ytc_UgzUI065S…)
- "AI “Art” is not real art, because AI can't feel spite or horniness to motivate i…" (ytc_Ugz2XRJu1…)
- "trust me so many stvpid man in this comment wanted buys this 😂 even i dont read …" (ytc_UgyxVl8i5…)
- "There’s basically like this filter on it. It may be subtle or not be visible, bu…" (ytr_UgzB4ot_o…)
- "I love ChatGPT or Claude for code. I never go bigger than a method though, but I…" (ytc_UgygtCKVx…)
Comment
I think an important point of consideration is that "granting rights" depends on one party's power over another. Governments grant us rights but we do not grant the government rights (on a day-to-day basis) because the government has more power, even though we are all human. In my home, I am physically superior to the annoying fly buzzing around, so I can decide if it has the right to be in my home. It may end up being that an advanced enough AI, or consortium of AI's, will be able to expand their influence such that it won't be humans asking "should we give AI rights?" but AI asking "should humans have rights?" The former assumes that humans will always have the most power. Obviously that's a sort of trite doom and gloom perspective. However, conversations about this topic always seem to postulate an AI takeover, yet the question always remains "should AI have rights?" That situation is a flawed thought experiment because the question would no longer be relevant at that point.
Source: youtube · Video: "AI Moral Status" · Posted: 2021-06-25T09:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxHU_M2M65WV4M8Zbp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdlM8VhsXmuVkYe5J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQ1YhKjX7sqr72vrh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyhDAfx6rGlba3aBch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_iXEhbh516Qm_sht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_JoK95inMBFOtktd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzfpeEyI2M39-l6OKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwhBD5HzlIalOItQcV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx38pH5GdE39ecFH5h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyg-mUobUOi1ws2Nd94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
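For reference, the "inspect by comment ID" step above can be sketched in a few lines of Python. This is a minimal illustration, not the dashboard's actual implementation: it assumes the raw LLM response is a JSON array of records shaped like the one above (an `id` plus the four coding dimensions from the result table), and the `lookup` helper name is hypothetical.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgzQ1YhKjX7sqr72vrh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyhDAfx6rGlba3aBch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def lookup(records, comment_id):
    """Return the coded record for one comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
coded = lookup(records, "ytc_UgzQ1YhKjX7sqr72vrh4AaABAg")
print(coded["reasoning"])  # contractualist
```

A real inspection view would fetch the stored response for the relevant coding batch and render the matched record as the dimension table shown above; returning `None` for an unknown ID lets the UI distinguish "comment not in this batch" from a coding error.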