Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Having feelings is necessary for understanding, just as having a body is necessary for moving about in space. The idea that feelings are a handicap, useless and inefficient for an artificially intelligent being to have, is absolutely ridiculous. Feelings can have a downside: unwillingness and hesitation are protective measures coded into humans. Fine-tuning your emotions is like programming yourself to be the best you that you can be; when you tell someone to grow up and stop acting childish, you're asking them to reprogram themselves for adult life. Pain and unhappiness can be avoided, and the same goes for robots.

The upside of human emotions and feelings is sensitivity to pain as a protective measure; in robots it's really only higher tolerances. The counterpart to human pain receptors in a robot would be sensors. In humans, pain triggers muscle contractions faster than the signal reaches the brain and the person moves the muscles deliberately. An involuntary response like that can already be programmed into a robot with no consciousness. So we shouldn't program AI to say "ow"; rather, something similar could be put in place to properly judge what might damage the robot. Adrenaline is a great example of a human response generally regarded as a great evolutionary advantage: it puts the gears into overdrive for only a limited time, allowing more rapid responses to a stimulus. The body may get damaged, or even block out pain temporarily, but these are stresses that can be repaired over time. A robot could have that too: staying powered up and charged using only enough energy for necessary tasks, and drawing on extra reserves of energy only in certain situations.

Staying interesting and up to date, and needing mechanical and software upgrades to avoid obsolescence and getting thrown out, is to robots what death is to us. Robots with AI could transfer their components to form a new robot. The question is: if consciousness forms, is it programmed to form, or does it just happen? Once formed, can it be transferred or uploaded like a computer program or photos? And if you can transfer it, can you copy it? An AI's goal in conscious life is not efficiency, just as ours is not. Another common belief is that once sentient, a robot will look at all robots as the same. Can an AI have a pet? If it does, is the pet on the same level of the hierarchy, or is it something like a human's pet?

Finally, animals enjoy being themselves; they don't attempt to be human, though they may enjoy or fear our presence. Young children imitate adult human behavior; like a robot being programmed, they seek to learn, adapt, imitate, and hopefully one day surpass their parents or guardians. Adult humans may look at a robot and call it efficient, but we don't seek to be like a robot, and a conscious AI would likely feel the same way. It's simply easier to craft a robot with no AI in it, like a tool, than to breed a human with no consciousness. AI will likely feel the same way; we don't have a good counterpart to compare ourselves to the way they will. The closest we can come, not counting work animals, are brainwashed people, individuals in vegetative states, and those born without brains.
YouTube, "AI Moral Status", 2017-03-03T11:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgielHUcDdA5OXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj4NrahXa96EXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UghzYjspMwjFFngCoAEC","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UggLmtcPeIQoAXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugg2fQsxiLdlPHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgiquswRuL9IPHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugg2mLhYUFQPBHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugiu8TROy3RaxHgCoAEC","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UggnCeYK8ev_UHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgjERjmDByRaSXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"} ]