Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Argument number one: "If It can experience pleasure and pain, it Deserves Rights" Well, yes and no. If you had to kill a human and a chicken, which would you kill? As humans, we value human life over animal life. Animals have rights, but we humans have greater rights. Killing an animal is killing an innocent thing, of which can feel pleasure and pain, and basic emotions, but is mostly instinct. As humans, we experience a wide variety of emotions and developed cognitive thinking, which animals lack. A human life is worth more than an animal life, because a human life impacts more than an animal life. For robots, if we programmed them to have pleasure and pain, then we can easily take it away. Giving robots emotion is dangerous because it humanizes them. If we gave robots the power of "unconditional love" it would be fake because we made them love us. How sadistic would we have to be to give robots pain? Why give something the power to feel pain? How would a robot destroy us? They would humanize themselves, make themselves like us. We cannot destroy something of our own kind. Of course, a robot would only destroy us if we gave them the power to have greed. We have power in our hands, and a robot would only destroy us if we wanted to be destroyed in the first place. Give a robot emotions, and they have power over us. Is a robot's life worth more than a human's life? Obviously, not. Or at least, not until they make themselves human.
Source: YouTube · AI Moral Status · 2018-11-21T02:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugxas9EL26SiOt6JXBN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx6vDPoNCxolx6Pd0V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx-g4uTofvR0PeMO9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwPT_GeN76e1BvpNK94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxruAIpX-hOQ8yJzSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzkmo0foKkVcbUQlL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyCdpIYuuFoMqAzAzJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwko9z_Kxx9z24oILB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxO8dxOOTSrar98wCt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxdm2XgaY5Hf011aDt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"} ]