Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
I normally love Hank's content, but I feel like he's out of his depth here. It seems like he's taking a bunch of highly speculative claims about the future of AI at face value because it's coming from people who claim to be concerned with the potential of AGI/ASI. The problem is that people like this have the exact same incentives to lie about the potential of AI as the people selling AI, because ultimately their relevance and financial status also rests on the unfettered growth of the AI industry. The AGI maximalist's sales pitch is "give me money and I'll build a super intelligent AI that will improve your life". The AGI doomer's sales pitch is "give me money and I'll make sure the super intelligent AI they build doesn't want to hurt you". That's the same argument, and it rests on an assumption that AGI is possible and that our current language models will naturally lead to it. Also, not for nothing, but it really is worth digging into the history of the coauthor Elizer Yudkowsky. He isn't an academic or AI developer. He's just a guy who got famous in Silicon Valley because he wrote an absurdly long, self indulgent Harry Potter fanfiction about how great logic is and ran a forum dedicated to an extreme form of rationalism that appealed to a lot of the nascent tech elite. All of the worst people in the tech industry, including the people actively making the problems related to AI worse, love him.
youtube · AI Moral Status · 2025-10-31T08:1… · ♥ 7
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxnhVMdx4H5KG97R914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8w9Zsyzc24y2przp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxtR4Pt8nUMCs_ZJ3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyC5Gw2e__-OdtBDZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyrDlVgZczTRreG_al4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyXw9i7ZA1Aq7C_Q0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwmnECZLmYxsytfsqR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]