Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Before we discuss whether robots should get rights, we first need to know why humans deserve rights in the first place. Perhaps nothing deserves rights, or maybe all sentient beings deserve rights. But then, how do we tell non-sentience from sentience? Would a machine indistinguishable from a human be sentient? Would they be human? What if I create a human, atom for atom, from raw materials I procured? Would it still be a machine assembled from parts, or will it be a human? Is there a point to this discussion, in the first place?
What I am trying to illustrate here is that much of our understanding of fairness and justice comes from biological evolution and its constructs. Therefor, there probably won't be an objective answer. In the days of slavery (most of human history), people had no problem denying other humans of rights, so justice gets even fuzzier. There is no definite answer to robot rights, as there is no definite answer to the distribution of rights.
youtube
AI Moral Status
2017-08-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
```
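A batch response like the one above can be turned back into a per-comment lookup (matching the "inspect any coded comment" workflow) by parsing the JSON and indexing it by `id`. The sketch below is a minimal illustration, not the tool's actual code; the key names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown, and the raw string is abbreviated to two of the entries above.

```python
import json

# Two entries copied from the raw LLM response above (abbreviated for the example).
raw_response = """
[
 {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
"""

# Keys observed in the response; anything missing is treated as a malformed row.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the coding dimensions by comment ID."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing keys: {sorted(missing)}")
        indexed[row["id"]] = {k: row[k] for k in EXPECTED_KEYS - {"id"}}
    return indexed

codes = index_codes(raw_response)
print(codes["ytc_UgzAxBYGDkKt5sS06Ql4AaABAg"]["reasoning"])  # mixed
```

The second entry is the one rendered in the Coding Result table above (unclear / mixed / unclear / mixed), so the lookup reproduces that row from the raw output.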