Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
It's more one single issue, which encapsulates both those things. To simplify, a…
ytc_UgyY0N4XE…
Take to consideration those who take delight in war and were not inviting ai int…
ytc_Ugxn-wXhE…
POV: Your brother takes your phone off you when your in the middle of a if you k…
ytc_UgxJwofEu…
It boggles my mind that racists, criminals, sexists, fascists, and other types o…
ytc_UgwXsS7qE…
I agree with you because ai is more useless than us most of the time…
ytc_UgxVdCbNP…
This has everything to do with the “Dark Enlightenment.” Curtis Yarvin and all t…
ytc_Ugz75BGtl…
Absolutely! Sophia's full-body design really brings her to life and makes intera…
ytr_Ugw9gqxRw…
If you insist on blaming the parents for this, _please_ take a look at the court…
ytc_UgwTQdYPa…
Comment
Picking up from this video and the linked Westworld one, here is my reply:
What is knowledge? What steps are needed to achieve it in artificial intelligence?
1) Describe the world (using math, as is currently done, e.g. a "class" in programming, vectors/sets in math)
2) Find a way to evaluate all of the described things, perhaps through pre-set values standing in for human preferences
3) Find a way to calculate the consequences of actions and choices, using point 1 (as is partially done in AI and game theory)
4) Using point 2, evaluate how positive or negative an action or choice produced in point 3 is, perhaps considering the different "points of view" expressed through point 1
5) Using all four previous points, create a way to choose which action or choice to take, like a "free will"
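The five steps above can be sketched as a minimal decision loop. Everything here is an illustrative assumption (toy state variables, hand-picked values), not a description of any real AI system:

```python
# Hypothetical sketch of the commenter's five steps; all names and
# numbers are illustrative assumptions.

# 1) Describe the world: a state and the possible actions with their effects.
world = {"pain": 5, "pleasure": 2}
actions = {
    "help": {"pain": -2, "pleasure": +3},
    "harm": {"pain": +4, "pleasure": -1},
}

# 2) Evaluate a state via pre-set values (a stand-in for human preferences).
def evaluate(state):
    return state["pleasure"] - state["pain"]

# 3) Calculate the consequences of an action on the described world.
def consequences(state, action):
    return {k: state[k] + actions[action][k] for k in state}

# 4) Score an action by evaluating the state it leads to.
def score(state, action):
    return evaluate(consequences(state, action))

# 5) Choose the best-scoring action ("free will" reduced to an argmax).
def choose(state):
    return max(actions, key=lambda a: score(state, a))

print(choose(world))  # → help
```

Of course, reducing step 5 to an argmax is exactly the simplification the comment goes on to question: the calculation alone may not be what grounds rights.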
So, is this ability to perform all of those calculations enough to deserve rights?
No: animals have rights even though some of them have no self-consciousness or capacity for rational thought.
There's another component: emotions. One particular "emotion" is "suffering" (see the Westworld video).
What reduces pain and suffering, or provides pleasure and relaxation, is considered "good"; similarly, what causes pain, fear, disgust, or sadness is considered "bad".
So, does a particular kind of human being who is very clever but unable to feel any emotion (like the "bald, elegant, clever men from the future" in the TV series Fringe) not deserve rights, on the grounds that they are comparable to a calculator, a personal computer, or a mobile phone that I could smash to the ground without any legal or ethical consequences, since "it" is just a "mechanical" tool, a biological calculator, even if "it" has self-consciousness and an internal representation of itself just as we humans do (I mean, "it", like all "normal" humans, knows what humans are, and knows that "it" has this knowledge)?
No: "it" is still a living being, differing from a "right-less" bacterium because "it" can be rational, has self-consciousness, and is able to think, even if "it" has no emotions or feelings.
So, do robots deserve rights?
IMHO, yes, if and only if they can decide on a set of "life purposes" (based on some kind of intelligence and/or feelings) that can differ from those issued by their creators at creation time.
For instance, a human male cannot become pregnant and then a mother, because his DNA forbids it, but he can become a singer even if he started out playing soccer or painting.
Do you agree? :D
youtube
AI Moral Status
2019-09-20T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwoe0cpgXSno8WS_Qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzthbSXLro3Qh8qYxp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwaDry6a7_B7HB9pMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwj_Add4YYiechoYNh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8JN8D5eEezc0NO3Z4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_4cWwNX0T0GIMmR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0N2oolttFDyKjcnJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzyhbMV1UsxkXFuSk14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2NMtB7ztDq4tiltF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx_28QwUyHyAY7kPml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
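The raw response above is a JSON array of per-comment codes, one object per comment with an `id` and four coded dimensions. A small sketch of how such output might be parsed, validated, and tallied (the field names come from the response itself; the two-record sample and the validation logic are assumptions for illustration):

```python
import json
from collections import Counter

# Two records copied from the raw response above, as a stand-in for the full array.
raw = '''[
 {"id":"ytc_Ugwoe0cpgXSno8WS_Qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzthbSXLro3Qh8qYxp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Check that each record carries an id plus all four coded dimensions.
for rec in records:
    missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
    assert not missing, f"{rec.get('id')}: missing {missing}"

# Tally the values of each dimension across all records.
tallies = {d: Counter(rec[d] for rec in records) for d in DIMENSIONS}
print(tallies["responsibility"])  # → Counter({'ai_itself': 1, 'unclear': 1})
```

Validating before tallying matters here because an LLM coder can omit fields or invent values; a schema check of this kind catches that before the counts are trusted.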