Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would say: as long as a robot can't say "by itself" that it is an entity, and describe itself, there is no reason to consider it aware. We can tell what is going on in an AI brain; as long as there is no "Who am I?" or "Why am I here?", there is no reason to get into trouble (even if it is sometimes very hard to identify its real thoughts).

Animals, on the other hand, are a different problem. We don't know exactly what is going on in an animal's head. They may think about who they are and why they are here; we don't know. Are we allowed to assume something that may rest on our lack of knowledge?

The next question is: what about humanity, if we become able to fully understand the creation of thoughts? "Rights" are bound to the term "soul", and "soul" just describes something we don't understand. No one has ever found an atom of a "soul element" that could explain its existence; I think it is a construct for something we can't explain. And as long as we can explain what robots do, they don't have a "soul". So what happens if we can actually explain every behavior of human interaction? Do we lose our "soul"? Maybe the real question is: do we have the right to revoke the right to a soul from any other entity? Do we have to update our limited perspective to ensure that we ourselves have rights in the future?
Source: youtube, AI Moral Status, 2018-12-05T04:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwxbGDsqCiSNuQhRfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwVgsUMRlcixxZfw8p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx631C12qaWO8ZV5vN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzT9okUcSlUw_n7VSd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyipJWvs2wfjFQCHOd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz5JhTK_u6mxG2AHjB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwOauXWT3WGLwPcCXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYcPwVex6HZFsb2kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzUSa2V7YjIYKavGqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxbAV0LZJaLxgRItnl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]