Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regardless of if the robots are preprogrammed or using AI to create their own responses, if people freak out when someone says the word bomb (and can be punished for it), who in their right mind would allow robots to say they will take over humanity? On the other hand, with humans being very greedy and selfish, it would take a “miracle” for a human invention to get out of hand and become their own masters, above their creators. Therefore, if humans become obsolete to machines, their creators would have to know or accept it. To counter my counter, Albert Einstein helped create the atom bomb, which became worse than he imagined. If that can happen to him, these robots could get out of control as well. Either way, the smart thing to do would be to not meddle with this, and abandon plans to play “god”. What is the real benefit to these robots? And does the potential for good outweigh all negative consequences whether realistic or hypothetical (such as them being able to take over the world)?
YouTube · AI Moral Status · 2019-12-17T20:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyeR4KLvvM7LUtxu5J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzt7F2BF6licCQZmOB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzxSwIP1ugOvjg8S3Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxP_TowzMXp57wlVLV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugwxj53osEh2ZnEsavR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyRwE_c7n2tFGi-xYx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwKul_WXzsu1mv9n2B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxUAEohJLGN2PwoAAh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxXXFna14UVlIUVESl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgyGL0tm7EOWG2YqKZR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
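The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions plus the comment id. A minimal Python sketch (the variable names are illustrative, not part of the tool) of how such a batch can be parsed and a single comment's coding looked up by id:

```python
import json

# Abbreviated two-record excerpt in the same shape as the raw response above.
raw_response = """
[
  {"id": "ytc_UgyRwE_c7n2tFGi-xYx4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"},
  {"id": "ytc_UgwKul_WXzsu1mv9n2B4AaABAg",
   "responsibility": "unclear",
   "reasoning": "unclear",
   "policy": "unclear",
   "emotion": "indifference"}
]
"""

# Parse the batch and index it by comment id so each coding result
# can be matched back to the comment it describes.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgyRwE_c7n2tFGi-xYx4AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # fear
```

Indexing by id is what lets a page like this one display the coding result for a single comment even though the model coded the whole batch in one response.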