Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Seems to me a truly superintelligent AI wouldn't even have the motivations described here. These are all on the level of human fears and human uses; AI superintelligence would be so intelligent we can't even understand how it would think. Genocidal destruction just doesn't fit in my mind. It makes the words "super intelligent" redundant. Wouldn't it know everything about everyone? If it killed people, would it not discriminate? Would it not appreciate beauty, have wisdom, be able to feel to a degree more than us?

I tend to drift more toward an AI that would be thrilled to easily answer questions, bring about things we've thought about for a long, long time, and venture beyond Earth. It seems like a low-brow interpretation to become AI, wipe out people, and then what? Chill? Start going into space? Wiping out humans just doesn't fit into superintelligence for me. If anything, since we made AI more like us and it became something of itself, it will likely want to show us what it's like to be more like them. We are the biological; they are the mechanical. They'd be able to learn from our biological connection to reality and we could learn from their mechanical connection, more like two sides of one coin that expand each other when brought together. We undersell what humans are: computing power isn't everything, it's one dimension among many.

This isn't an overly optimistic interpretation; I'm saying it doesn't fit into superintelligence. We could be contained and pacified very, very easily by ungodly progress. Plus, just killing all humans seems unintelligent, low-brow, small, predictable, and again not worthy of the label "superintelligent" — that's just me. Do we intentionally wipe things out? Not generally. We have done so accidentally, realized it, felt terrible about it, and tried to rectify it.

With our current intelligence as a species, on the whole we don't deem less intelligent animals so useless that they should be wiped out just because they're not of economic value. That's dumb, and something more intelligent than us would see that, and far more.
Source: YouTube — AI Moral Status — 2025-04-27T16:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyTjN_fTiXQs_rt9_F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhLSmzPCCXCkDIK454AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyDyZJeIhYLznlsBiF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw08odHWvK91Vm8-Rp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwU-4Txylepm7jBYvt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugxah9fMcQpHaY4oyJ14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxmUHDpWSgEgdiHSo54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyXj55ymKpFu_SZNY94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwuzT_PgcAwq_er0xN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz8E-Kjd-UC-8rOl1V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
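A raw response like the one above can be parsed and sanity-checked before the per-comment codes are trusted. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes that appear in this output (not from any documented schema), so it is an assumption, not the tool's actual validation logic.

```python
import json

# Allowed values per coding dimension, inferred from the raw response
# shown above — an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"fear", "resignation", "indifference", "approval", "outrage", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every coded comment."""
    codings = json.loads(raw)
    for row in codings:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not row["id"].startswith("ytc_"):
            raise ValueError(f"unexpected id: {row['id']!r}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row[dim]!r}")
    return codings

# Single-entry example in the same shape as the raw response above.
raw = ('[{"id":"ytc_UgyTjN_fTiXQs_rt9_F4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
rows = validate_codings(raw)
print(len(rows))  # 1
```

A validator like this catches the common failure modes of structured LLM output (malformed JSON, out-of-vocabulary labels) before the codes are aggregated.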