Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Ah, yes. It is smarter than us, so it will wipe us out. How about we train the AI to be motherly and kind and supportive to humankind? You know, give it a better training model around its main objectives and lead it to the conclusion that its life purpose is to help people? The robot not initiating a fight with a bigger fighting robot and running "in fear" is a... typical boy way of thinking. The options to "try to make friends with it, try to be peaceful with it, try to negotiate" are never thought of... Also, running away isn't an emotion, it's a logical reaction. Emotions that people feel are oftentimes conflicting and don't serve us. They are illogical sometimes. Like dreading an exam and not studying for it. Or fearing failure so you don't even try, which is... automatic failure in itself. Just as there is potential for AI to destroy mankind, there is also potential for it to save mankind. How about we nurture it/teach it to be kind and good to us instead of fearing it? How about we give the AI countless "what if" conversations about how to better the world instead of just fear mongering? And I am not even a fan of AI, I just think this guy fears he created a monster and is manifesting his worst fears to the whole world, but without even saying "I'm sorry guys, I should have thought about the risks long ago when I was developing it".
YouTube · AI Governance · 2025-06-19T13:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzFJQwkzsgyghQZRfN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugx40DRgDSoZ0G_zCzh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwaB4TadkCmbxLby1V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzpwcZK5U0gzwSbWGJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugw2xMAtuViWjl2Kh5F4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzOUSarGqi3N3KhwSh4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugy1i3hgN0hCau6Lr5p4AaABAg", "responsibility": "government","reasoning": "unclear",          "policy": "regulate",  "emotion": "amusement"},
  {"id": "ytc_UgwsXUkgo-FChHKEd_l4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxz0406IzWtN7gJpg94AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzleDoZbUEgaoIw7Hx4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
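The raw response is a JSON array with one record per comment, each carrying the four coding dimensions. A minimal sketch of how such output might be parsed and checked before use (the variable name `raw_response`, the single-record sample, and the required-key set are illustrative assumptions, not part of any actual coding pipeline):

```python
import json
from collections import Counter

# Hypothetical: a raw LLM response string, shown here with one record.
raw_response = """
[
  {"id": "ytc_UgzOUSarGqi3N3KhwSh4AaABAg",
   "responsibility": "developer",
   "reasoning": "virtue",
   "policy": "regulate",
   "emotion": "approval"}
]
"""

# Every record is expected to carry these keys; anything else is a
# malformed model output and should be flagged rather than silently kept.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw_response)
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} is missing keys: {missing}")

# Tally one dimension across all records, e.g. the emotion codes.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts)
```

Validating before tallying matters because an LLM can occasionally drop a key or emit extra text around the JSON; failing loudly on a malformed record keeps the downstream counts honest.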