Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
- "He looks and acts like someone terrified of what AI could be used for! We should…" (ytc_UgxDJbwYE…)
- "Ok. But it is a little disengenuous thinking "that's all" when just clicking gen…" (ytc_UgwbKo4Ok…)
- "2 - everyone has been personally effected by AI. the algorithms that are playing…" (ytc_UgwnO5O11…)
- "Yea i'm with you but there's no way you're going to get any of your ideas implem…" (ytc_Ugxpu0HkA…)
- "If AI is dominated by only one hyper power which they think they definitely wil…" (ytc_Ugww8MbES…)
- "Tech creates new business opportunities, not jobs. Only a business can create a …" (rdc_kif07wt)
- "So one video of this incident versus millions of un-video'd successful trips in …" (ytc_UgzEnynD_…)
- "When Google came, students googled the answers for their essays. When you could …" (ytc_Ugx75VBs3…)
Comment
I'm an animal studies scholar (published on the matter, Ph.D., etc.) and the whole idea about robot "rights" and the way humans are inherently tied to their technology has been around for a long time, but more and more I see these robot "rights" people (often referred to as transhumamnism/transhumanists in the literature) coopting the language of animal rights/protections to discuss robots. Ignoring the *massive* difference between animals and robots: animals are alive. I absolutely refuse to accept that robots/AI are alive because you can turn an AI off (in my argument, analogous to killing it) and turn it back on, and it is right as rain. Conversely, you cannot "turn off" a *living* thing and turn it back on without (usually) very serious repercussions (brain damage, organ failure, etc.). Similarly, language about human rights has been used to discuss animal rights, but in my scholarship on analyzing this language, I contend that the reason we *can* denigrate human beings is because it is fine to denigrate nonhuman beings. But again, there's an important distinction about being alive. When we abuse or exploit a living being, that's permanent damage physically, mentally, emotionally, or all of the above. And this parallel is also something science fiction writers have discussed in the realm of robot animals (Ted Chiang's short story "The Life Cycle of Software Objects" and /Do Androids Dream of Electric Sheep?/ by Philip K. Dick immediately spring to mind). People do not want to accept the idea that harming animals is an ethical and/or moral wrong because 1) it would severely disrupt our economy, and 2) it requires an uncomfortable look at our current and past actions, both of which are points you've raised in this video. Furthermore (and finally, I'll stop writing a second dissertation here), many animal studies scholars have pointed out that the idea of rights is inherently premised on capitalism, particularly the idea of ownership. 
Civil rights and women's rights are premised on the idea of not being owned and having the ability to own (e.g., own property, have access to the legal system). Thus, the assertion of slurs as a vector for allowable exploitation and oppression is linked to what the ruling class (race, species, whatever) believes they are entitled to own. But the question we are left with then: if everyone has access to full ownership (of bodies, property, money, capital, etc.), who or what will be left TO own?
youtube
2025-09-17T15:4…
♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgypcAREmueCIjHsfXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzt3eoqhgZCMbLetPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzX1vCr97JCIZSgjcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzzPu-uZT-iHaNmL514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzK7aMk0Cw0C_uSV4J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5ZN63e31t9oCpZLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwLc5SjKf977u6R9sR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx5EIxwlD6QOcstWmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVhykrQ9joTFtm9sx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwKWpP1BpU9Jt-c1vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
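A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal illustration, not the tool's actual validation code; the allowed value sets are assumptions inferred from the values visible in this page, not a confirmed codebook.

```python
import json

# Assumed value sets per coding dimension, inferred from the sample data above.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array of coded rows) and keep only
    rows that have an id and a valid value on every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" in row and all(
            row.get(dim) in values for dim, values in ALLOWED.items()
        ):
            valid.append(row)
    return valid
```

Rows with an out-of-vocabulary value (an LLM hallucinating a label outside the schema) are silently dropped here; a real pipeline would more likely log them for re-coding.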