Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below.
- `ytc_UgxzB-NAK…`: Calling AI "art" your own is like commissioning someone to draw something and th…
- `ytc_UgzQw4Ahl…`: People can barely understand what they read in YT comments. Who thought that ope…
- `ytc_UgyRtExH2…`: people say "oh this car killed only one person but normal cars kill way more" Fu…
- `ytc_UgxhoRxO1…`: Me personally? I think artists will find ways to prevail. AI is cool, but if yo…
- `ytr_Ugwj362hz…`: @theskiypdee AI are basically complicated calculators. Like I said, no self awar…
- `ytr_Ugz-_lMNf…`: Aaaaaah HANK HANKMAN GREEN!! Can he use AI to draw you a thumb drive or is he a …
- `ytc_UgxOaYVhd…`: ChatGPT is not even AI it is not intelligence it is a word calculator an advance…
- `rdc_gtd4kt2`: I remember calling my friends on our landline and I'm pretty sure that I live in…
**Comment** (youtube · AI Governance · 2025-10-27T14:5…)

> Yudkowsky is a commentator and writer, not a real AI researcher. He's certainly no "pioneer of AI safety research." He knows as much about AI as a bright 12 year old sci-fi enthusiast. Yudkowsky is a hysterical doomer when it comes to forthcoming AGI and ASI. He doesn't understand that AI doesn't have will or drives like biological organisms. Animals that had strong drives for sex, hunger, breathing, pain avoidance, pleasure seeking, and social dominance had better survival rates, and therefore were more likely to propagate their genes. Advanced AI systems never had a need for these survival traits, consequently, they have no inherent will or drives. The only "alignment" problem is the one that we've ALWAYS had: alignment between humans and groups of humans. To counter bad humans that control ASI (Artificial Superintelligence) requires good humans with ASI. What's truly dangerous is what Yudkowsky wants: relinquishment of ASI. This is not only a potential disaster for national security, but would severely limit medical advances. NO THANK YOU.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
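The raw response is a JSON array with one object per comment, each carrying the four coding dimensions. To support the by-ID lookup described at the top of the page, such a response can be parsed and indexed. A minimal sketch follows; the function name is hypothetical, and the value vocabularies are only those observed in this batch, so the real coding scheme may allow more:

```python
import json

# Raw LLM response as shown above (truncated here to two entries for brevity).
raw = """
[
  {"id": "ytc_UgwShpY7vnGJ6FN3abF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz7gI_yy04N4gtao614AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Value sets observed in this sample batch only (an assumption, not the
# authoritative codebook).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user",
                       "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "unclear", "industry_self", "regulate", "ban"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage",
                "resignation"},
}

def index_codes(raw_json: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting rows whose values fall outside the observed vocabularies."""
    by_id = {}
    for row in json.loads(raw_json):
        for dim, allowed in DIMENSIONS.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return by_id

codes = index_codes(raw)
print(codes["ytc_Ugz7gI_yy04N4gtao614AaABAg"]["policy"])  # regulate
```

Validating against a closed vocabulary before indexing catches the most common LLM coding failure, an out-of-schema label, at parse time rather than downstream.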