Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
People keep fearing that superintelligent AI will kill us all, but that is silly imo. A superhuman AI wouldn't kill humans for the same reason your phone doesn't kill you: it has no instincts, no desires, no feelings, no evolutionary drive, no need to dominate or survive. If it becomes dangerous, it will be because humans designed or misused it, not because it spontaneously decided to become a killer god. What we have to worry about is a small group of evil psychopath billionaires like Alex Karp and the Ellison family, or death-worshiping governments led by scum like Lindsey Graham or Randy Fine, having control over any superintelligent AI. By their words and actions they will happily bring digital authoritarianism, automated censorship, automated propaganda, AI-driven surveillance states, robot police forces, economic dependency and control, and a permanent underclass stripped of agency. It is not out of the realm of possibility that they could decide the population has no value, that we are useless eaters wasting their resources, and use AI to mass-murder everyone. I believe that outcome is more probable than AI killing us all.
youtube · AI Jobs · 2025-11-18T22:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
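
For reference, a coding record like the one above can be modeled as a small typed structure. This is a minimal sketch in Python: the class and field names are illustrative, and the label sets are only the values observed in the raw LLM response below, not necessarily the full codebook.

from dataclasses import dataclass

# Label sets observed in the raw LLM response in this section (assumption:
# the real codebook may define additional labels).
RESPONSIBILITY = {"none", "developer", "company", "ai_itself"}
REASONING = {"unclear", "deontological", "consequentialist"}
POLICY = {"unclear", "none", "liability", "ban"}
EMOTION = {"indifference", "outrage", "approval", "fear", "mixed"}

@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgyFe25RtQxiavAfO8B4AaABAg"
    responsibility: str  # who is held responsible
    reasoning: str       # moral frame of the comment
    policy: str          # policy stance expressed
    emotion: str         # dominant emotion
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def validate(self) -> None:
        # Reject labels outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown label: {value!r}")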
Raw LLM Response
[ {"id":"ytc_UgxX9jgTvkXKETQUjYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzPBSBWNQOkuHWoiQN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyrghViDz2HMe16XPV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyFe25RtQxiavAfO8B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx_56a6Z7uQOOfohYh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugy_k3y9kgA8jMuhXHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyqsPcfqgcSzunUNcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgylKpUYKOj1T_WC90J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwz16IvIYegE814Syx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz9bKBr5MRVzsfACrd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]