Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Robot's wouldn't deserve rights, even if they had such thing as a "understanding consciousness" and here's why: Machines are just things programmed to do what they do, because we are able to use some basic principles like transistors to manipulate them. Machines will ALWAYS do what they have been programmed to do. It is like a windmill which spins during a storm: It is designed to spin and thats the only reason it does. It is not intended by the windmill to do so as the windmill doesn't have a free will. On a higher level, we can program "self-learning" AI which can eventually develop something comparable to a understanding consciousness, but as the computer it will be ALWAYS able to retrace why the AI "decided" to do sth. as it can do nothing, which wasn't programmed by its developer. Even if the AI is able to write another AI, you could tell that by retracing its code. On the other hand, all humans and some animals have the potential to not be predestined (as described by Descartes), but its still hard to tell whether we have a free will or not. BUT MACHINES DEFINITELY DONT HAVE A FREE WILL, any kind of "feeling" is interpreted by us, as we tend to interpret humanoid behavior into all kinds of things. Machines don't feel anything, all actions of them are caused by their programming. Some people also believe that we as a human are predestined (as described by Descartes), but as it is not clear we in contrast to robots/AIs are in need for rights. I'm sorry for my bad english, i'm still trying to learn it
Source: youtube · AI Moral Status · 2018-09-10T15:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
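
A coded result like the one above can be represented as a small record type. The sketch below is illustrative only, assuming the four dimensions shown in the table; the class name, field names, and example value lists are assumptions, not part of the coding pipeline.

from dataclasses import dataclass

@dataclass
class CodedResult:
    comment_id: str
    responsibility: str  # observed values: "developer", "ai_itself", "none", "unclear"
    reasoning: str       # observed values: "deontological", "consequentialist", "unclear"
    policy: str          # observed values: "none", "regulate", "liability", "unclear"
    emotion: str         # observed values: "indifference", "approval", "fear", "mixed"
    coded_at: str        # ISO 8601 timestamp

# The result shown in the table above, as a record.
result = CodedResult(
    comment_id="ytc_UgxbZOFTWlsui99RR114AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-27T06:24:59.937377",
)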
Raw LLM Response
[ {"id":"ytc_Ugzc_jMUsc_eI6TqcMp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxwYjf3cLp9DLBAiHt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzEIn8BSNXgPFr9O_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxbZOFTWlsui99RR114AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwFTzgiOdHFs2KsE2F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz82PdulqhlQdsKRl94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgziZQpogkskXMHkTpZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgwEB3AAuyHAJznwlH54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZReiLD6rCIPq5n4N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwRCkhUCchpSpmQKql4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"} ]