Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I personally would bring up this point: Is the pain and suffering they feel real? For us biological creatures (including animals, yes), pain is very real. But robots and AIs only feel pain if someone programs in that function. The robots can be fine if that piece of coding is simply removed; the same can't be said for us biological creatures, because it concerns our own functionality and survival.

"But pain would help AIs and robots to survive! Don't they deserve the right to fight for their own survival?" But that's just it: the machines by default don't care about survival or have feelings. The only reason they might want to survive or have feelings is if someone made it so. (Or if AIs foolishly add this functionality to the AIs they create.)

When it comes to us biological beings, damage can be hard to repair. This is not the same for machines. Their parts can be manufactured and replaced easily. Programs/coding/databases can be copied perfectly, but humans... cannot. If we're gone, we're gone for real.

"So does this mean they WON'T deserve rights, even if they do gain consciousness?" Err... What I think should be done is to prevent this from happening in the first place. If it happens, undo it quickly. Machines can be patched. Do it as quickly as possible, and instruct your AIs _never_ to "simulate for real" or create something that would simulate for real. They need the distinction that any simulation they do is in fact a simulation and not the real thing. If an AI is ever made to "simulate human" and nothing else, then we're done for. (Or at least, it'll be a huge mess.)
youtube · AI Moral Status · 2024-09-17T08:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
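A record like this can be sanity-checked against the labels the model is expected to emit. Below is a minimal validation sketch in Python; the label sets are taken only from the values observed in the raw response further down, so the actual codebook may allow more, and check_coding is a hypothetical helper, not part of the pipeline.

    # Label sets observed in the raw LLM response below; the real
    # codebook may define additional values, so treat these as an
    # assumption rather than the authoritative schema.
    OBSERVED_LABELS = {
        "responsibility": {"none", "developer", "company"},
        "reasoning": {"deontological", "consequentialist", "unclear"},
        "policy": {"none", "regulate"},
        "emotion": {"approval", "outrage", "indifference", "mixed", "fear"},
    }

    def check_coding(record: dict) -> list[str]:
        """Return a list of problems with one coded record; empty
        means every dimension is present with a known label."""
        problems = []
        for dimension, allowed in OBSERVED_LABELS.items():
            value = record.get(dimension)
            if value is None:
                problems.append(f"missing dimension: {dimension}")
            elif value not in allowed:
                problems.append(f"unexpected {dimension} label: {value!r}")
        return problems

    # The coding result shown above passes the check.
    print(check_coding({
        "responsibility": "none",
        "reasoning": "deontological",
        "policy": "none",
        "emotion": "indifference",
    }))  # -> []

Returning a list of problems instead of raising makes it cheap to validate a whole batch of coded records and report every issue at once.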
Raw LLM Response
[ {"id":"ytc_Ugz08ZDfbVphQbPRRH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyc_H7WrWTNPqD_LcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYBHtP3s_Owb1T-mp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgE8uq6zswy7U-J2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy-G0wE-OjlkPpGhQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwY-KpVCGJbY1QCmwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwqjdz3O8onaP1tKVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzVI_KCifXxJLmlLgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwyTrV0qYN67KW0hhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzxsw8XjFqVzctpa1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"} ]