Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can't help but notice we analyze possibilities like superintelligence and turning lead into gold assuming that the resource limitations would be feasibly overcome (which is a big stretch of an assumption imo) The absurdity of alchemy and turning lead into gold isn't just the actual act of turning lead into gold, it's about profiting from the act of turning lead into gold, which even centuries later, we are nowhere near achieving that. It takes so much energy just to turn a miniscule amount of lead into gold. Similarly when it comes to achieving superintelligence or even AGI, we may never achieve it simply because it would likely cost too much energy and resources. We are currently using the equivalent of a city in electricity consumption just to train AI models that help us do relatively basic stuff like writing emails and making slop videos. To train one AI model to achieve superintelligence, it could very likely consume the planet itself. Not to mention there are currently multiple models from multiple companies in multiple countries working towards this goal. I am not at all involved in the AI field so I'm sure someone would have likely talked about this in much clearer terms. But I think spending time talking about AI models achieving superintelligence (while interesting) is a distraction from the real problem. The forces of capital will always move their resources to the salesman with the best sales pitch and we are all forced to participate in this Sisyphean task as test subjects while capital finds new way to exploit us with the existing AI technology just for the ultimate goal of accumulating more capital. I guess what I'm trying to say is, humanity will likely perish from a million different other problems we create just by existing under an unsustainable economic system well before superintelligence ever becomes a problem. Yes, I'm talking CaPiTAliSm babeyyyy
youtube AI Moral Status 2025-11-11T04:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyD_vVgK4lU66Lr9q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzC5ci0oXYUvBqFe1B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZQjSzkiOzmnrTb454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz5vty5u3LBNGmPlqh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTgAPXXot1H7fSba14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugz9aRh5H-dWDzkCLvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugy-YPCOCebMWJ9NcuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy86aQ-y1DSo4yqC294AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx_cFH_A9RtIjRcBJJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
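To verify a coded comment against the raw batch response, the JSON array can be parsed and indexed by comment id. A minimal sketch in Python, assuming the comment shown above corresponds to id ytc_UgziVby8mv9JCe3Ii9R4AaABAg (the only record in this batch whose four labels match the Coding Result table):

```python
import json

# Raw batch response from the model, truncated here to the record of
# interest for brevity; in practice, paste the full array shown above.
raw = """[
  {"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

# Index every record by its comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Assumed id for the comment above; each dimension in the looked-up
# record should match the corresponding row of the Coding Result table.
coded = records["ytc_UgziVby8mv9JCe3Ii9R4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

The id-keyed dictionary makes spot-checking cheap even when one batch response covers many comments.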