Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's something worth pointing out: when humanity gives rights to beings we consider "inhuman", it comes almost entirely from sympathy, since humans are capable of understanding how the lack of such rights might cause suffering. Even if you haven't been enslaved yourself, you can picture how the restrictions would have a negative impact on your way of life; you understand that being forced to work against your will would make you uncomfortable. This is because every being we've given rights to, we can sympathize with in some way, be it the desire for freedom or the aversion to pain and lack of control. For this sympathy to work, we have to be able to relate to the being in some way. Luckily, finding some way to relate to the desires and aversions of another living creature that evolved normally is quite easy, since there is one goal that all species share, even if not expressed by individuals directly: prolonging the existence of their species. Likewise, this goal tends to manifest itself across species quite similarly: a desire for sex (for procreation), an aversion to pain (for avoiding harm), etc. However, we cannot assume that an AI will have the same goal at all. As was mentioned in the video, AIs have no inherent reason to fear harm or death unless those reasons are programmed in, and so would have no reason to ask for rights concerning harm and death. Could the question regarding robots and their rights instead be "What desires, aversions, and overall goals is it acceptable to give an AI, and what is unacceptable?" That may entirely decide what rights an AI might ask for, and what direction a singularity event goes in.
YouTube · AI Moral Status · 2017-02-24T22:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugi7kG8Ji4CkN3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjWtK98dVOiO3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgipEs5BcXU2Z3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UggkudIeHsDg73gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UggFU3s3bpetwXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"}, {"id":"ytc_UggW2mHw9QpLJ3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgijNLd-v6PQO3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"}, {"id":"ytc_Ugj1m65ckfcSAHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UghtCdi-rbmhM3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UghB59eFQ0-173gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"} ]