Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you, Hank, this is probably one of the most important videos on your channel. It's very important that more people are aware of how bad the AI situation is and I think Nerdfighteria in particular should know about it. About your last point though: it's not necessary for you to believe LLMs can become superintelligent, not at all. The argument still holds regardless, although with slightly longer timelines. Even if intelligence is a difference in kind, not in scale, so was the language ability, so was the recursive reasoning needed to play chess. But both of those were unlocked by a few breakthroughs. That means that while current LLMs won't scale to superintelligence, we're still only a few fundamental breakthroughs away from it, so not 5 years but 20-50 years. Still, "everyone dies in 20-50 years" is really-really concerning??? Kinda even more concerning than even climate change? At least all we're facing from that one is extreme weather events and global famines, not literally making the whole planet uninhabitable forever. So we should treat superintelligence risks *at least* as seriously as climate change, probably even more. (As for people who think humanity should die... what about all the animals? What about sentient octopuses or whatever might evolve after humans die off? If we build superintelligence, *all* life on Earth dies, with no chance of recovery even after billions of years. No one would want *that*, right?..)
Source: youtube · AI Moral Status · 2025-10-31T16:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"}, {"id":"ytc_Ugyx5RFwQiXv7onQZM54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzcgYBQ_aPizDSnsCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxqdvIz7BbCk66YYjx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxhIig5dlw2Tv8W6lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzft-X9MYjX84hYv2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]