Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
If it's AI generated that should be super easy to refute and let everyone know t…
ytc_UgzvbtetS…
It's like coming up with a recipe and having a robot cook the food for you, even…
ytc_UgzEnBRHl…
Typing a prompt into AI is equivalent in nature to commissioning art from an art…
ytc_Ugwp3ikN3…
We are all going to see the greed of the top 1% and the other teams that are rac…
ytc_UgzmtVDl8…
I watch every episode and love the brilliant banter you guys bring to the tech t…
ytc_UgzjmEo-e…
That program is illegal according to what we have been told. It should be destr…
ytc_Ugy0R09pK…
ART is INTEND, A.I can copy meaningless art but never anything meaningful.
You …
ytc_UgyWLIUE3…
As a lawyer, i agree and disagree. Ai is taking over some tasks (contract review…
ytc_UgweKsAI4…
Comment
This video was a good primer for these topics, and I understand that information on such abstract and high-concept topics is difficult to fit in a short and accessible video, but in totality it failed to properly broach the question of "Do Robots Deserve Rights?" because it- and I am not saying this is the fault of the content-makers more than it it was the fault of the constraints they were under- did a barely sufficient job at defining what constitutes a "robot" (though it did, in an accessible manner and in a short duration demonstrate how the line between human, animal, and artificial consciousnesses can be blurred), did a poor job at defining "rights", and as such was left haphazardly bridging the two in making normative suggestions as to whether said robots should- or could- have said rights.
The first problem with "rights" is whether one is referring to "rights" as a descriptive and observable, or as prescriptive recognition and protection of certain thoughts, actions, behaviors etc. The former is more in line with "Liberties", or that which one can perform (do, say,go to) without impediment. Negative Liberties are the closest representation of this idea in legal theory. The latter is more in line with the dictionary definitions, intersubjective consensuses of "a moral or legal entitlement to have or obtain something or to act in a certain way." The video assumes this entitlement comes only from the ability to feel suffering, but does that mean that I am not legally entitled to not be killed in my sleep, or one is legally entitled to kill me in my sleep, when no suffering would be involved? Not in most legal systems that wish to maintain order.
The second problem is who administrates and/or protects said rights? In either of the two understandings, rights are only maintained if they can be protected from infringement- by violence if need be. Either the claimant uses one's own means of violence to protect one's rights, or outsources that protection- to a private agent or the State.
The matter of "should" was initially covered very well- in that it recognized that if AI units were to protect their own rights, it would likely conflict with the safety and comfort of humans. The matter of 'should' they protect their rights (whether their rights are more important than human safety) may have delved more into 'how' they are to protect their rights- and I would suspect weaponization, control of nuclear weapons, and ability to upload their consciousness to be safe while humans remain in jeopardy would factor into that equation.
youtube
AI Moral Status
2017-02-24T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgjHZmE-P1wjYHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugh5b-c-Ihkf5XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugjz82-iEoN4gngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJskniuv3BRHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiGgC8hXrcyj3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjFvFpjBiBaAHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggxmQ49dX4AMXgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgjkHGCm0-BGFXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgisW3ncy5LQLngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjS06I2br-QEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
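The raw response above is a JSON array of per-comment codes, one object per comment ID, with four categorical dimensions. A minimal sketch of how such a response might be parsed and tallied (the `tally` helper and the two-row sample payload are illustrative assumptions, not part of the tool; the field names match the schema shown above):

```python
import json
from collections import Counter

# Illustrative two-row excerpt in the same shape as the raw LLM response above.
RAW_RESPONSE = """
[
 {"id":"ytc_UgjHZmE-P1wjYHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgjkHGCm0-BGFXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
"""

# The four coding dimensions seen in the result table and the JSON schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw: str) -> dict:
    """Parse a raw coding response and count the values of each dimension."""
    rows = json.loads(raw)
    return {d: Counter(row[d] for row in rows) for d in DIMENSIONS}

counts = tally(RAW_RESPONSE)
print(counts["policy"])  # Counter({'unclear': 1, 'ban': 1})
```

Keying the result by dimension rather than by comment ID makes it easy to see the distribution of codes across a batch; a lookup by ID (as the UI above offers) would instead index `rows` on the `id` field.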