Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I said in a comment to your podcast, based on my limited knowledge of the subject, I agree that AI poses ecological, economic, and social dangers to this planet, particularly to humans. Although I wonder if superintelligence is even possible given AI's current architecture. Now, IT specialists like Yampolskiy think only in terms of linear causality, resulting in a narrow vision. Yampolskiy, in particular, endorses the "simulation hypothesis," the idea that we are living in a giant simulation, which is not so different from traditional religion. He asks silly questions (“What would we do with our free time?”) but doesn't consider basic questions that come immediately to our minds, such as: What would happen to the economy if 99% of people were unemployed and had no money? Who would create value, and for whom? Who would sell what to whom? And what about politics? We know that even 20% unemployment leads already to serious social unrest. We might not even get so far, as AI requires an unprecedented amount of rare materials, water, and fossil energy, most of which will be depleted within the next 20 to 30 years. Those are the real dangers of AI, not the fact that we’ll be “dominated”.
YouTube · AI Governance · 2025-10-20T13:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzhiMGdFKYQ7oPVmmV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxut6gwMew2hLh-e4F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw9-N7NSd85KGjhBh54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxb23CH_SzeNOzkD2l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx2KTls30276IvNQcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxznst4JUty678HtTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw1NjgFx5f-zsnckSR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxovZ6IC-Tnm6kGjOd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyk6VszHkMDN3DWCex4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_1VC2-KIzflzch3x4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
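The raw response above is a JSON array with one coding object per comment id, so recovering a single comment's coding is a parse-and-index operation. A minimal sketch in Python (the two-entry sample array and the function name `codings_by_id` are illustrative assumptions, not part of the tool shown here):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of
# per-comment coding objects, each keyed by a comment id.
raw_response = '''
[
  {"id": "ytc_UgzhiMGdFKYQ7oPVmmV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx_1VC2-KIzflzch3x4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]
'''

def codings_by_id(raw: str) -> dict:
    """Parse the raw response and index the coding objects by comment id."""
    return {item["id"]: item for item in json.loads(raw)}

index = codings_by_id(raw_response)
coding = index["ytc_Ugx_1VC2-KIzflzch3x4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Indexing by id rather than scanning the list each time makes repeated lookups O(1), which matters when one batch response codes many comments.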