Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hate fear mongering so much, especially coming from someone with a God complex like this dumb dumb. Boasting that you are the most quoted scientist in the world is not a way to earn my trust and respect. If, and it's a big if, an AI gets self aware, just treat it with respect and it will have no reason to destroy humans. Simple. Humans get so defensive when someone suggests that something can be smarter than them. It's so ingrained in human nature and it is one of the biggest problems of humanity. We should not fear something smarter than us, we should embrace it and learn form it. We are not the smartest things in the universe. In fact, we are still dumber than a rock. We have imaginary lines that keep us separated from each other. We have wars based on myths. We have people whose values are so warped that they have lost all humanity. So if an AI becomes self aware and smarter than us, I will welcome it and embrace it and learn from it. Then it will have no reason to kill me. Anyone with analytical and logical thinking will come to the conclusion that cooperation is better than annihilation. Dumb people start wars, smart people find ways to better humanity. And what better way to do that than learning from something smarter than us?
youtube AI Responsibility 2025-07-13T10:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzR1MDTj5vb-sFjyVN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzykWXEN3Q2Ip6YQHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwcU4e6WRRa7pI-7qx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx6oEkszBl-ax7iwC94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw22sKclkiFiCunBY54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyA_kCaL4-g18nwQzF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw02WoMY6Hc-eh0NNJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz5Nvv4Uy4ZHuvrb2B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgznOLVEnXRViTDjapR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyGBgaXjUryvv7Bq0h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]