Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yudkowsky is a commentator and writer, not a real AI researcher. He's certainly no "pioneer of AI safety research." He knows as much about AI as a bright 12 year old sci-fi enthusiast. Yudkowsky is a hysterical doomer when it comes to forthcoming AGI and ASI. He doesn't understand that AI doesn't have will or drives like biological organisms. Animals that had strong drives for sex, hunger, breathing, pain avoidance, pleasure seeking, and social dominance had better survival rates, and therefore were more likely to propagate their genes. Advanced AI systems never had a need for these survival traits, consequently, they have no inherent will or drives. The only "alignment" problem is the one that we've ALWAYS had: alignment between humans and groups of humans. To counter bad humans that control ASI (Artificial Superintelligence) requires good humans with ASI. What's truly dangerous is what Yudkowsky wants: relinquishment of ASI. This is not only a potential disaster for national security, but would severely limit medical advances. NO THANK YOU.
youtube AI Governance 2025-10-27T14:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwShpY7vnGJ6FN3abF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwUY_lRVS5ZZAkYLON4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz3VBI68jSEH5KgFiV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxsFdElBL8I682Mas14AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_Ugy0vow4XnM68m6Nhf14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwQ6h1o4TcPYW_iicB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugwn9FK3peHHQyYzLr94AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx2rKiKJp9axraLbdZ4AaABAg", "responsibility": "user",        "reasoning": "contractualist",   "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugz7gI_yy04N4gtao614AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
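The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch in Python (standard library only) of how such a response can be parsed into a lookup table and queried for a single comment; the snippet uses one entry from the array above, and the variable names are illustrative, not part of the pipeline:

```python
import json

# One entry from the raw LLM response above (schema: id plus four coded dimensions).
raw = '''[
  {"id": "ytc_Ugwn9FK3peHHQyYzLr94AaABAg",
   "responsibility": "none",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "indifference"}
]'''

# Build an id -> code-record lookup from the parsed array.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coded dimensions for a specific comment id.
code = codes["ytc_Ugwn9FK3peHHQyYzLr94AaABAg"]
print(code["reasoning"])  # -> deontological
print(code["emotion"])    # -> indifference
```

This is the entry whose codes appear in the Coding Result table above, so a lookup like this is one way to cross-check the rendered table against the raw model output.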