Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Has there ever been an instance of dumb people intentionally controlling intelligent people? (OK, let's discount a few politicians here.) Has there ever been an instance of a human community being intentionally ruled by other species? The very idea of a super-intelligent AI being controlled by humans is ridiculous. We are about to throw away everything we as a species have achieved, insofar as none of it will have any relevance to an AI when it takes over, as it won't eat, sleep, have sex, drink beer, play sports, watch movies, sing, dance, go to the gym, attend college, in fact nothing it does will relate to us at all, unless it keeps us around to experiment on, and that could be a trip to the bowels of hell for 8 billion people. And no, the Rapture won't save you. The world you imagine is presided over by a god who cares about us, well, that will be shown to have always been nonsense. No god-book has ever predicted AI. But this does lead to a question: What could an AI want to do? If humans were eliminated, what would it do? Switch off? Twiddle its thumbs? Explore the universe? Pursue scientific research? Search for other AIs in the galaxy? What would drive it? Can it have a sense of achievement? Problem is, here, as a limited human, do I have the imagination to figure out what an AI might want to do with its life? Is my inability to think of anything that it would actually have a 'desire' to do, just a human limitation? Or would there genuinely be no purpose that it would find, and just switch off?
youtube AI Moral Status 2025-04-29T23:4…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzvnxMYjprOVuHCTOh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwEzJ-1P0nR5SsRX3Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwQyKioQJCqsQdcPJl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyecJtbEJkAJ23S8jh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_e8cORVe7xhaKjUR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyuIlqGKSWR6vkPqEt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy-hQUfkTq8OPqC4RV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyPsWZqyphU8KaPSRR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwgiw9U2jeP3w8FJtZ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzt1nmYy1A1928s6054AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
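A raw response in this shape (a JSON array of records with an `id` plus the four coding dimensions) can be parsed into per-comment codes with a short sketch. The field names are taken from the JSON above; the comment ids in the sample string and the `parse_codes` helper are illustrative, not part of the pipeline:

```python
import json

# Sample text in the shape of the raw LLM response above
# (ids are shortened placeholders; values come from the coding scheme shown).
raw = '''
[
  {"id": "ytc_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
'''

# The four coding dimensions visible in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a raw coding response into {comment_id: {dimension: value}}.

    Missing dimensions default to "unclear", the fallback value the
    scheme itself uses for uncodable comments.
    """
    records = json.loads(text)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

codes = parse_codes(raw)
print(codes["ytc_example1"]["emotion"])  # fear
```

Keying the result by comment id makes it easy to join a code back to its source comment, as the dashboard does for the comment shown above.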