Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
6:03 "[critics say] the paper fails to detail how the AI agents are able to make such huge leaps in intelligence." This seems to be probably a mistaken summary on the part of the journalist, or else a false critique. The paper actually does explain it in detail, on their "timelines forecast" page, along with their reasoning; and their reasoning seems valid to me insofar as I've read it. Of course, they don't explain how the AI would work inside, but you don't need that information in order to predict that the AIs will likely continue to get more capable along a certain trajectory, if you extrapolate trends. As for the self-driving-cars critique, it seems invalid to me, because as far as I know, the people who were predicting self-criving cars so soon were not expert forecasters and have a poor track record of predicting things like that, and were largely business leaders, who have an incentive to make biased, overhyped predictions. With AI 2027 on the other hand, some of the people who wrote it are actually top-ranked forecasters, and none of them are business people or have a financial stake in AI's success, as far as I know. Getting back to the part I quoted: I would say a more accurate summary of what critics like Gary Marcus are saying, if that's who was being referred to, is "for deep reasons about how I think intelligence works, I argue that superintelligence will almost certainly take way longer to build, and current trends therefore will not continue; although what happens in the story is otherwise somewhat plausible". I haven't looked deeply into his arguments, but I currently highly doubt he's right. (I'm not an author of the paper.)
youtube · AI Governance · 2025-08-03T00:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzWyJVHby-USFhrti54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugys4XcNaVGQZiCmcD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyjlx39jEMxKYOLI_54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz-FLVBhG9WVSuIqOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwZ-JaOEr2fEdkGvwR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxmpOCsBSEF4cmpARh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx1ilW0cEuY72LvmJx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwYrQTpiwWsR3Gegdt4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz5Y34h3mn_L0FC6OB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwSW4PPZPqz8jZ6XVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"} ]