Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really believe that we will never reach true AI. We may have machines with the appearance of it, but they will never be self-aware - that is, they won't know they are alive. As humans, we are able to think inside our minds, and recognize that we exist. I think, therefore I am. Robots do not have this, and never will. This is because human beings have a soul, and I believe all animals do, to an extent. The soul, however, is not some abstract idea, but an actual thing. It is a higher form of matter that is invisible to human eyes. I'm sure that many may find my argument distasteful because it is, obviously, religious, but I find that the idea of a soul makes sense scientifically as well. It explains why, if you were to repair a human body after death, the person would not come back to life. It explains some of the mysteries of the brain, and in connection with it, forms our thoughts. And again, it is something that we cannot see, like many things in the universe (higher wavelengths of light, dark matter, etc.). Like those things, though, we can observe its effects. When one applies this to robotics, it becomes interesting. Although the idea of sentient AI is neat and all, I don't think it will ever be possible. If our thought is connected to a soul, it means that a machine can never have intelligence. We could never give it a soul either, because matter can neither be created nor destroyed, and I don't think we could even make a soul from existing materials. So, in summary, the idea of a soul could make scientific sense, and is the reason why humans and living things deserve rights, and machines do not. Many may write me off as someone who is justifying my anger at machines, and although that is not the purpose of my argument, I probably wouldn't be lying if I said I was. I don't want to live in a world where I yell at my computer for closing a program and erasing my work and then have to console it because it feels bad.
Source: YouTube, "AI Moral Status", 2017-02-23T15:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugh1hCEu79jWwngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgivtXE2oENClXgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugg27rqt4sju2XgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgiENWsk-yWCpXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UggcTw_upiJ2J3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugjy1FD399xBmHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "skepticism"},
  {"id": "ytc_UggpzkAUQetp3XgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgihHxRZlnfd_XgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgiIHwfDrsRpk3gCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugg2Aon9jFDTGXgCoAEC", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
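When inspecting a raw response like the one above, it can help to parse it and check that every record uses only expected codes before trusting the coded values. Below is a minimal sketch of such a check; the allowed-value sets are assumptions inferred from the codes visible in this response, not the authoritative codebook.

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# response (an assumption; the real codebook may define additional codes).
# "emotion" is left unchecked here since it appears to be open-ended.
ALLOWED = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "ban"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and return any records with unexpected codes."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

raw = '[{"id":"ytc_Ugh1hCEu79jWwngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]'
print(validate_records(raw))  # [] when every code is in the allowed sets
```

An empty result means every record conforms to the assumed schema; a non-empty result pinpoints which comment ID and dimension to re-inspect by hand.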