Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well this all depends heavily on what separates human persons from any other being. Humans do not posses inherent dignity merely because we feel pain and have preferences, any animal can feel pain or have a preference and no amount of complexity in that area would truly raise that level of dignity any higher. Human persons posses higher inherent dignity because of our will and sentience, our rational souls (a type of soul that inherently possesses the capacity for rational thought, ie "why am I here, what is my purpose, etc"). This is not simply something that can be achieved through higher level brain activity; there are plenty of other animals with advanced brain capabilities, none of them have ever made any advances nearly as complex as us and none of them have ever shown signs of comprehending the metaphysical (things that are not physical) such as religion, philosophical discovery or an understanding of the natural laws they are bound to. This is what makes humans not merely beings but persons; a rational soul, free will. In order for any other being to posses an inherent dignity that demands inalienable rights, that being must also be rational and capable of freely choosing good and evil. The fact is, unless we can somehow create rational souls, machines will never become rational beings. Machines do whatever we program them to do; even if we give them billions of potential responses, the most complex programming, the capacity to feel pain or pleasure and to "die," they will never truly be a sentient being. The reason for this is that they would still be working off of programming and they do not posses any soul. An AI is a mathematical or logical code created within a machine that must always do what the programmer tells it to do; it will come to the conclusions to problems that the programmer has programmed it to come to, it will always work off of a certain code and it cannot choose to deny this code. Even if you gave a robot the decision between something good and something bad, say "kill or don't kill random person on sight," it will make a decision based on an already existing logical rule or mathematical probability; it *must* follow a specified system of parameters, that's how all AI work. AI can only fain being sentient. So, machines can never become sentient and therefore never have an inherent dignity that demands rights be given to it because they will never be rational beings like us, they will simply do what they're programmed to do and they will never truly have free will. If we ever develop some sort of way to create souls (metaphysical things that are untouchable by physical means) then it will be a different story but until then, machines can't "become human" and therefor will never necessitate rights in the same way we do.
Source: YouTube · AI Moral Status · 2018-05-15T09:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
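
A coded comment can be thought of as one record with the four dimensions shown above plus the comment id. The sketch below is a minimal Python representation, assuming the allowed values are exactly those that appear in this batch's raw response (the full coding scheme may define more categories); the class and constant names are illustrative, not part of the coding pipeline.

from dataclasses import dataclass

# Label sets observed in this batch's raw LLM response (assumption: the
# real coding scheme may allow additional values not seen here).
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"mixed", "fear", "approval"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str              # YouTube comment id, e.g. "ytc_..."
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any label the model returned outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unexpected responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unexpected reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unexpected policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unexpected emotion: {self.emotion}")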
Raw LLM Response
[{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzZ3yd0xcUfATtKCuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxoH3CHkZZ3Q4iGr-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzagFJ9PYFKvkOqdEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzQLosjNyCDhYjepBR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzbqgiIjw8rlWcmH2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6GL-VcARNPVudB354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy4f-Au3qIAvy45JPt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgymqWaWzHC3NxHIoU54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]