Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@LouisRossmann @4:09 "If it works, when there's no license plate installed in th…
ytc_UgxL-pAQw…
I'm gathering scholar articles of how AI affect users mental health. Please let …
ytc_UgzfdeRJT…
I hope they regulate it so that large language models can no longer be advertise…
ytr_Ugy7hg7Oy…
The fact that an entire truck is made with the whole point of having no one in i…
ytc_UgwOh4Szf…
I think the safest thing would be to always speed when riding a motorcycle. If y…
ytc_Ugx5uk9O6…
The real risk isn’t AI turning on us. It’s us trusting it enough to stop questio…
ytc_UgytoNt93…
AI is basically not using real human creativity without permission. AI creators …
ytc_UgxMGfNvg…
No cause of course anyone's gunna jump straight to the defense of their race. Wh…
ytr_UgymTddYV…
Comment
6:03 "[critics say] the paper fails to detail how the AI agents are able to make such huge leaps in intelligence." This seems to be probably a mistaken summary on the part of the journalist, or else a false critique. The paper actually does explain it in detail, on their "timelines forecast" page, along with their reasoning; and their reasoning seems valid to me insofar as I've read it. Of course, they don't explain how the AI would work inside, but you don't need that information in order to predict that the AIs will likely continue to get more capable along a certain trajectory, if you extrapolate trends.
As for the self-driving-cars critique, it seems invalid to me, because as far as I know, the people who were predicting self-driving cars so soon were not expert forecasters and have a poor track record of predicting things like that, and were largely business leaders, who have an incentive to make biased, overhyped predictions. With AI 2027 on the other hand, some of the people who wrote it are actually top-ranked forecasters, and none of them are business people or have a financial stake in AI's success, as far as I know.
Getting back to the part I quoted: I would say a more accurate summary of what critics like Gary Marcus are saying, if that's who was being referred to, is "for deep reasons about how I think intelligence works, I argue that superintelligence will almost certainly take way longer to build, and current trends therefore will not continue; although what happens in the story is otherwise somewhat plausible".
I haven't looked deeply into his arguments, but I currently highly doubt he's right.
(I'm not an author of the paper.)
youtube
AI Governance
2025-08-03T00:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzWyJVHby-USFhrti54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugys4XcNaVGQZiCmcD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyjlx39jEMxKYOLI_54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-FLVBhG9WVSuIqOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwZ-JaOEr2fEdkGvwR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxmpOCsBSEF4cmpARh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx1ilW0cEuY72LvmJx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYrQTpiwWsR3Gegdt4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz5Y34h3mn_L0FC6OB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSW4PPZPqz8jZ6XVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
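The raw response is a single JSON array covering a batch of comments, while the Coding Result table above shows one record looked up by its comment ID. A minimal sketch of that lookup step, assuming only the JSON shape shown above (the function name and the embedded sample record are illustrative, not part of the tool):

```python
import json

# One record copied from the raw response shape above, standing in for
# the full batch. In the real tool the string would be the model output.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugyjlx39jEMxKYOLI_54AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Index coded records by comment ID, keeping only known dimensions.

    Missing dimensions fall back to "unclear", the table's default.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }


codes = index_codes(RAW_RESPONSE)
print(codes["ytc_Ugyjlx39jEMxKYOLI_54AaABAg"]["emotion"])  # indifference
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each batch is parsed once and every per-comment table is a dictionary lookup.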