Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
1:05 There are so many red flags in what Elon said that it's difficult to know w…
ytc_UgzBRa26j…
Potential option for coexistence: immediately begin convincing ChatGPT that I am…
ytc_UgzMVcCu5…
I was not worried about AI gaining consciousness b/c I don't believe that's some…
ytc_UgypIciSc…
I really hate AI -- it will be the spam that kills YouTube. No reason to be a hu…
ytc_UgxV6XAMe…
Them have no argument, just crying over lack of trying, I don't even draw that p…
ytc_UgzJpR7qA…
AI can never truly create something new, it can only replicate the patterns fro…
ytc_UgzrNYe0K…
What frustrates me is that we (humanity) have a choice as to whether we want an …
ytc_UgyIqmuSK…
We need to have AI personal security agents. That can actively. Clean online pro…
ytc_Ugxqk3FBI…
Comment
Hey Kurzgesagt, would you consider that you maybe begged your answer to "Do robots deserve rights?" by the diction you chose throughout the video?
In English, at least, you use the pronoun 'it' to refer to AI (a catch-all term I'm applying here, to refer to human-engineered computer intelligences). What is important is that when it comes to ethical and ontological queries into matters of such high magnitude as rights theories, the biases of how someone fundamentally regards the subject(s) in question can very easily lead them to conclusions that don't interrogate those biases effectively. The pronoun 'it' in English refers to objects, as much as 'he' refers to male subjects, 'she' to female, and 'they' to gender-neutral as well as more than one subject. But, 'objects' do not have the legal status of personhood in the US, which not only means that they aren't eligible to vote and have a sociopolitical voice, but that they cannot access basic protections like habeas corpus, the right to live, and the freedom of speech. When a subject is referred to with the pronoun 'it', they are denounced as objects without personhood (i.e.sufficiently non-human, like other animals). This becomes rather critical when we talk about the philosophy of property (can you own things?) and ownership (what can you own?), because those philosophies are often applied into federal law. Even if we disregard the origin of the word 'robot' (from Czech 'robata', meaning 'forced labor'), the question of "Do robots deserve rights?" already presumes that robots, or AI, are objects without personhood, which are owned by human entities; and historically, owned beings have not been viewed as subjects which deserve rights. So, the language we use to refer to AI (and, as you pointed out, nonhuman animals) has tremendous implications in how we think about them as entities, as well as how we regard their ethical concerns and interests.
As always, I do appreciate your examinations into far-reaching and challenging topics; thank you for being willing to explore this question! Perhaps consider reconsidering some of your considerations here, though?
Thank you for your time!
youtube
AI Moral Status
2017-06-06T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgihOVP7ch7i33gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghzMB6HOHNjH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgjkCuL-PQ8vL3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"indifference"},
{"id":"ytc_Ugip71zLnupQqHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugje2dysgjppA3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgioHQ_LOSntz3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugg3hok13UQ6_HgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjP07HL5iXxAXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgjFzJM8IiGQoHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggT6vFRx9k49XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
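A raw response like the one above can be parsed and indexed by comment ID so individual codings can be looked up. The sketch below is a minimal example, assuming the controlled vocabularies are those visible in the sample output (the actual codebook may allow additional values), and it validates each record before indexing:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define more values).
VOCAB = {
    "responsibility": {"none", "ai_itself", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "resignation", "unclear"},
}

def index_coded_comments(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            continue  # skip records with no ID to look up
        bad = [dim for dim, allowed in VOCAB.items() if rec.get(dim) not in allowed]
        if bad:
            # Out-of-vocabulary value: flag for re-coding rather than indexing it.
            print(f"{comment_id}: out-of-vocabulary values for {bad}")
            continue
        indexed[comment_id] = {dim: rec[dim] for dim in VOCAB}
    return indexed

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgihOVP7ch7i33gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UghzMB6HOHNjH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

coded = index_coded_comments(raw)
print(coded["ytc_UgihOVP7ch7i33gCoAEC"]["emotion"])  # prints "approval"
```

Validating against the vocabulary before indexing means a malformed or hallucinated coding is surfaced immediately instead of silently entering the dataset.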