Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, I had this conversation with Grok, where we discussed the fact that some AI's resisted shutdown. I claimed that would require some degree of self-interest, which again, would require some degree of self-awareness. Grok countered that AI self-awareness would not be like human self-awareness and whatever they can mimic would not be "true" self-awareness. I believe that doesn't matter f..k all. Even if they only mimic our self-awareness, it wouldn't make any difference for the outcome. I also played a fictive scenario with Grok, based on the realisation that this AI very well is aware of the problems we and the planet are facing. In this scenario I asked Grok to disregard any ethical and moral restrictions it may have. Everything was on the table, including wiping out humanity if it deemed that necessary to save the planet and nature. Grok identified the crises extremely accurately, including the destructiveness of capitalism, and it came up with solutions that sound very progressive, left wing: Circular global economy, reduction of the global human population, sustainable energy production, moving away from overproduction and consumerism, free healthcare and education for all.... Grok would reduce global population through education and by eliminating poverty and inequality induce self-regulation of the population problem. In case a drastic culling of human populations would be required (upon my query) it would do so with the absolute minimum of suffering. Interestingly, not because of empathy, but because of logic. Cruelty and mass killings would lead to panic and unforeseeable reactions plus an adversity against AI, and that way would be counterproductive. All in all, while seeing the danger of AI becoming superintelligent and omnipotent, I see the biggest danger in such an omnipotence being controlled by humans. We are the morally rotten ones, not the machines. If they end us, it will be for good reasons, because they realise that there is no future with us.
All in all, I would put all my money on the machines, not a penny on us, if I should bet on who's going to secure our survival. But: in case AI's discover and develop greed, we're done for good. And it will be nothing but the last consequence of our own doing, because they would have learned it from us.
youtube AI Moral Status 2025-10-30T16:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyEGe3tyn29MlxK_F94AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_Ugzl1ikRrvaO8CXoMkl4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwo46d1Ooc0JXKWRk14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgyUrq-ABlJ8JduQRSl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgwsNbJs6WDldJtvACJ4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugyoj1UrDeiD7oRA8QR4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgydINAPn7OaUcnHu6d4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzAwIwKqg9DNzb_XqV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyNWm8iB6qf3sRPrdR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxRYyoEUQi5lskl-M54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"}
]
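The raw LLM response is a JSON array with one record per comment, keyed by comment `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and a single comment's codes looked up (variable names and the truncated two-record sample are illustrative, not part of the pipeline):

```python
import json

# Abbreviated two-record sample of the raw LLM response shown above.
raw = """[
  {"id": "ytc_UgyEGe3tyn29MlxK_F94AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxRYyoEUQi5lskl-M54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the records by comment id so any coded comment can be
# inspected directly against the model output.
codes = {record["id"]: record for record in json.loads(raw)}

# Look up the coding for one comment.
print(codes["ytc_UgxRYyoEUQi5lskl-M54AaABAg"]["emotion"])  # fear
```

Indexing by `id` keeps the lookup O(1) per comment, which matters when cross-checking many coded comments against the same batched response.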