Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Well this all depends heavily on what separates human persons from any other being.
Humans do not possess inherent dignity merely because we feel pain and have preferences; any animal can feel pain or have a preference, and no amount of complexity in that area would truly raise that level of dignity any higher. Human persons possess higher inherent dignity because of our will and sentience, our rational souls (a type of soul that inherently possesses the capacity for rational thought, i.e. "why am I here, what is my purpose," etc.). This is not simply something that can be achieved through higher-level brain activity; there are plenty of other animals with advanced brain capabilities, yet none of them have ever made any advances nearly as complex as ours, and none of them have ever shown signs of comprehending the metaphysical (things that are not physical), such as religion, philosophical discovery, or an understanding of the natural laws they are bound to. This is what makes humans not merely beings but persons: a rational soul, free will.
In order for any other being to possess an inherent dignity that demands inalienable rights, that being must also be rational and capable of freely choosing good and evil.
The fact is, unless we can somehow create rational souls, machines will never become rational beings. Machines do whatever we program them to do; even if we give them billions of potential responses, the most complex programming, the capacity to feel pain or pleasure and to "die," they will never truly be a sentient being. The reason for this is that they would still be working off of programming and they do not possess any soul. An AI is a mathematical or logical code created within a machine that must always do what the programmer tells it to do; it will come to the conclusions to problems that the programmer has programmed it to come to, it will always work off of a certain code and it cannot choose to deny this code. Even if you gave a robot the decision between something good and something bad, say "kill or don't kill random person on sight," it will make a decision based on an already existing logical rule or mathematical probability; it *must* follow a specified system of parameters, that's how all AI work. AI can only feign being sentient.
So, machines can never become sentient and therefore never have an inherent dignity that demands rights be given to them, because they will never be rational beings like us; they will simply do what they're programmed to do and they will never truly have free will. If we ever develop some sort of way to create souls (metaphysical things that are untouchable by physical means) then it will be a different story, but until then, machines can't "become human" and therefore will never necessitate rights in the same way we do.
Platform: youtube · Topic: AI Moral Status · Posted: 2018-05-15T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZ3yd0xcUfATtKCuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoH3CHkZZ3Q4iGr-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzagFJ9PYFKvkOqdEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQLosjNyCDhYjepBR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbqgiIjw8rlWcmH2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6GL-VcARNPVudB354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4f-Au3qIAvy45JPt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymqWaWzHC3NxHIoU54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]
```
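Because the model returns codes for a whole batch of comments as one JSON array, looking up a single comment's dimensions amounts to parsing the array and indexing it by `id`. A minimal sketch, using only the standard library and two entries copied from the response above (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken directly from that response; any other structure would need a different parser):

```python
import json

# A truncated example of the batched LLM response: a JSON array
# with one object of coded dimensions per comment.
raw = """[
{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]"""

# Index the parsed rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch one comment's coded dimensions by its ID.
entry = codes["ytc_UgyMQpRfgepJs_b43f14AaABAg"]
print(entry["reasoning"])  # deontological
print(entry["emotion"])    # mixed
```

In practice the raw model output may also contain malformed JSON or IDs missing from the batch, so a real lookup would wrap `json.loads` and the dictionary access in error handling rather than assuming a clean response.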