Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Question: how would an AI, no matter how intelligent, actually end the world? I’m not very knowledgeable on the technology so maybe I’m just being dumb, but there’s a fairly large gap between an AI concluding that an existentially harmful course of action would help it perform its task better, and that AI having the resources and freedom to actually perform that course of action. When people talk about the probability of doom, what’s the mechanism by which that doom could happen? Is it that an AI could become so good at manipulating people that it would basically be able to do whatever it wants? Is it that someone might choose to give it unregulated agency for some reason? Or is the main concern that we have no idea how it would occur, and thus wouldn’t notice the warning signs? I’m obviously missing something, considering that basically every expert in the field thinks that AI has a significant chance to pose an existential threat. I’m just asking because the details of that threat aren’t something discussions of AI tend to get into.
youtube AI Moral Status 2025-10-31T00:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyjyMyY_O4NgeZAJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxmTHu1yRq14lt75Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNWZ7nhSCvpXJJsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxT7RhFToA3B5KS5el4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBfFsr_6_n16hJfed4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxm7-V2cw080X9sQZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCiYAK2ms5Q0A5qhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxBMJKI2GG-3mj8Qi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKp43VLPuelxIF9Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjYNeaaZSElY40Qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
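The coding result shown above is derived by parsing the model's JSON array and selecting the record whose id matches the displayed comment. A minimal Python sketch of that lookup (the function name `extract_coding` is hypothetical, not part of the actual pipeline):

```python
import json

# Abbreviated raw response for illustration; the real response
# contains one record per batched comment.
raw_response = """[
  {"id": "ytc_UgxT7RhFToA3B5KS5el4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]"""

def extract_coding(raw, comment_id):
    """Parse the model output and return the coding record for one comment."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id)  # None if the model skipped this comment

coding = extract_coding(raw_response, "ytc_UgxT7RhFToA3B5KS5el4AaABAg")
print(coding["emotion"])  # mixed
```

Note that the fourth record in the raw array above (responsibility none, reasoning consequentialist, policy none, emotion mixed) is the one matching this page's Coding Result table.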