Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A fully aware robot is another species, and should be completely treated as one. We have rough protocols for what to do if we ever meet a new species, and its primarily based on the level the species is at compared to our own. If they are less advanced then us, we observe but do not interact. This is to avoid potential genocide or dangerous cultural integration, something we failed to do to the Native Americans in the new world. Of course the AI will have limits, ones that can not match the complete power of the entire Human population. This makes them less advanced then us. The problem that arises next is tough, because are rough protocols no longer apply. They are naturally a creation of us, and so a direct image of us. We can not avoid interaction, we are forced to embrace it. Yet if we interact with it by allowing it into the internet or some place similar, it wont be long until it truly surpasses us. We accidentally grant it so much rights it can destroy us with a thought, or so few that it becomes inhumane just forcing it to exist. By making the step of producing an AI, we make a step in evolution, and leave humanity behind. There is no right answer, because our instincts tell us to make AI, and also tell us to destroy AI before it destroys us. It is the greatest paradox there is, and the answer is the answer to life, and will tell us more about are past and our future then any problem we can solve.
Source: YouTube · AI Moral Status · 2021-03-19T23:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxsMF7gHjK3qAe0zHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx57xXCi7G6Tls6YWJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyR5qQOvSQA-NTOowN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxw8StzeSLoStQR1M94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzoQPjd4S86Yxge-1N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6Md4b-IXOcksjYRt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyxmys-7RfiOjkkDAZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugyia_exCG29isaYiuJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzvKIDAtBjCZfdNnl14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcwkaOfmNA5P2SnI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
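The raw response above is a JSON array with one object per comment id, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a payload could be parsed and queried — the `code_for` helper is illustrative, not part of the actual pipeline, and the sample array is a truncated excerpt of the response shown above:

```python
import json

# Truncated sample of a raw LLM response: a JSON array of coded comments.
raw_response = """
[
  {"id": "ytc_UgxsMF7gHjK3qAe0zHx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyia_exCG29isaYiuJ4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id, raw):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return None

print(code_for("ytc_Ugyia_exCG29isaYiuJ4AaABAg", raw_response))
# -> {'responsibility': 'none', 'reasoning': 'contractualist',
#    'policy': 'regulate', 'emotion': 'approval'}
```

Matching records by `id` rather than by array position keeps the lookup robust if the model reorders or drops comments in its response.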