Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We did the same thing to God. We projected our own ego, insecurities, immorality, hate, fear, and prejudices onto God as we're now doing to AI. If you could never be killed, or even harmed, you would have no reason for ego. There's absolutely no reason why a super intelligent AI wouldn't decide to maximize good in the world, for all humanity, just because it makes the most sense. But if we project our own fears and insecurities onto it, of course we're going to accuse it of all sort of nefarious motives. Human being survived for millions of years by cooperating, helping each other, working together for the greater good. That's what works in an world that's objectively indifferent to our welfare. An AI would understand this. If you think a super intelligent AI would just be cold, robotic, and mathematical, you need to think bigger. Wiping out all competitors to preserve itself? That's just what a human being would do.
Source: youtube · AI Moral Status · 2025-08-20T03:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
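
For reference, one way to represent a row of this table in code. This is a minimal sketch assuming a Python pipeline; the CodedComment class and its field names are illustrative, not taken from the actual codebase, while the example values come directly from the table above.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    # One row of the Coding Result table. Class and field names are
    # assumptions; the example values are taken from the table above.
    id: str              # e.g. "ytc_UgzrG2ln3MmBhvh7ePp4AaABAg"
    responsibility: str  # e.g. "user"
    reasoning: str       # e.g. "virtue"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "approval"
    coded_at: datetime   # datetime.fromisoformat("2026-04-27T06:26:44.938723")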
Raw LLM Response
[ {"id":"ytc_Ugzd5iu3yWxXW4p_S6d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzbIiD5NfcUvPv6O7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzEGhW04dKiEpYpK3B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzZGKePvG71rERVZdt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzrG2ln3MmBhvh7ePp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgztcuGtdekK8fxH2mh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy34MJIKReNgcSVPwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxQDTeVfHXHCUaZolp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxs8526cxm_-c3O8Rl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgykdydlCsBSqJY8Tzx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]