Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The government is going to figure out a way to weaponize these robots, that is going to be their first purpose and use and if one robot is taught how to fight in a war then that information goes into the cloud and every robotic connected will automatically follow suit, am I the only one seeing the dangers of creating such AI's will be for our future? This doctor is extremely nieve if he thinks that these robots are going to be used for the good of man kind and for the good of this earth. When it comes down to mass production of these robots who will own and control the robots? Will it be the government, the creators or the person who purchase them? Or is this just another way for the government to spy on humans like all of the other smart electronics we have does already? Will people who don't want to work be able to buy these robots and send them to work for them and receive a paycheck for the work their robot goes out to do in their place? If all the information they see hear and learn goes into a cloud and they are learning to mimic humans they will definitely know how to be deceitful and how to take what they have learned and use it against us. They will not have made anything better by creating these robots especially if they are taught to mimic humans. Everything will stay the same. And people will get used to talking to them like they are humans eventually they will start listening to what the robots want instead of the other way around.
Source: youtube · AI Moral Status · 2021-10-27T15:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
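A coding result like the one above can be held in a small record. Below is a minimal sketch in Python, assuming only the four dimensions and timestamp shown in the table; the class and field names are illustrative, not the tool's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    responsibility: str  # e.g. "government"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "fear"
    coded_at: datetime

# The result displayed in the table, rebuilt as a record.
result = CodingResult(
    responsibility="government",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```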
Raw LLM Response
[ {"id":"ytc_Ugw5jkOvjY1B15foqkt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxfKJDG7pByoFguS7B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugw83Fyx2f4PGrb6nWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwJBghLgKpZtFdJVgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzEl7pkuGEg5VYJ1-p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwTPaPseWjMR9POULN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxAmlMvsiApA7nCGkJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxkmiEqtFf_4_iKTgt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz-SybTUmadvWUz-Xp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxVUdMsd_ZdJJo5gBV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"} ]