Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hank, as a counterpoint to this, pleeeeaaaase read “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity” by Adam Becker. It’s a deeply researched and very grounded view on both the AI safety take (potential future risks) AS WELL AS the AI ethics take (real risks manifesting right now). And, he questions the implied corollary of these extreme future risks of “everyone dying” and this utilitarian morality pushed to its limits (as is espoused by many AI safety folks) which is: if there is an infinitesimally small risk of ALL THE TRILLIONS OF FUTURE HUMANS FOREVER no longer existing, then that tiny risk is quantitatively hugely more significant and outweighs even 100% certain suffering for any group of people existing now - so the existential risk theorists then feel empowered to say that their AI risk is more important than and should get more attention than any other cause on the planet. Including, real war, real famine, real inequality, real suffering. That is their ultimate position. Like, please read this book. His point is not - we should ignore the risks. His point is - we should be balanced in how we view the risks and ask who’s raising them and what their motivations might be.
youtube · AI Moral Status · 2025-10-30T21:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgydK4YU0WvkkXDhLZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyLW75ItQyohqOU8-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyi3pryPPZ16W5-jrN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyAcSPetC-PdFpwvhx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyCbY8TYZcio_FCw7B4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxqV2VekvkpMAdPBXd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwR5aqfElxaSpKXGOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyOXNQrSMo9rDaxXcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz35HnxfBiL56aUr4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"} ]