Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm admittedly only 40 minutes in so far, but to me the main issue is that Yudkowsky is making an argument by analogy to other systems (and then essentially saying "Now imagine that times a million"), and Ezra is saying, "Okay, fine, but how are you imagining this will actually happen in the specific case of AI?" and I think Yudkowsky hasn't done a good job, at least here, of illustrating that he has a theory of the case on how this actually plays out. That's not to say he doesn't have one, he might, but Ezra's primary goal with this conversation is clearly to understand whether Yudkowsky's alarm is born from his knowing information that Ezra isn't privy to or having thought through some argument that Ezra hasn't considered, or whether it is a little disproportionate and irrational. Given that, not having a theory of the case makes his argument fairly unconvincing. Now, in Yudkowsky's defense, being asked to predict exactly how a technology we've never experienced before brings about an event that's never happened before is a tough brief, and maybe argument by analogy is actually the best you can do, but I think he could be a little more intellectually rigorous and honest about communicating that. I think the "AI in Context" YouTube channel's "We're Not Ready for Superintelligence" video does a much better job of communicating the kind of argument Ezra is clearly looking for than Yudkowsky has done so far in this conversation.
YouTube · AI Governance · 2025-10-15T22:4… · ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgzS2oTeKUjkY3gUReB4AaABAg.AOInXuezjhDAOJNQrNspj9","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJNwdgKkmz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJQSUyiZQM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJSQzZ781-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugzl3OaI9Eh4nLbYz7J4AaABAg.AOImH03Cw8eAOJiLmbTXrR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugw4fegkhpEZ3ufwAPJ4AaABAg.AOIi5wSPIm2AOJ-oAbTVDr","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytr_Ugw4fegkhpEZ3ufwAPJ4AaABAg.AOIi5wSPIm2AOLAUhf7jsT","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugwwrc0koYUHVT_Zv414AaABAg.AOIgrwSqAUZAOL6X4BRJ6x","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgwzM1T9Z4ORS-gHOc54AaABAg.AOIgfAlhL3gAOJ3m7s-sJO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgySnjkGV4TD_4SA1RV4AaABAg.AOIg-5q8_vuAOJL1_4DEHp","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]