Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. Historical Pattern of AI Progress
Acceleration, not stagnation: Every time researchers said "we've hit a wall" (e.g., chess, vision, protein folding, language understanding), the wall gave way faster than expected. Surprises on short timelines: GPT-2 (2019) felt narrow, GPT-3 (2020) shocked people, GPT-4 (2023) was already writing code, and now in 2025, multimodal systems are becoming close to generalist assistants. That's only 6 years from "toy" to "capable coworker." That trend makes me put weight on the 2030s window, not the 2050s.

2. Compute & Hardware Curve
The U.S. and China are both building compute clusters on the scale of small power plants. Even if algorithmic breakthroughs stall, raw scale will still produce systems more powerful than today's — think GPT-4 → GPT-5 → GPT-6, each leap costlier but still coming. Historically, hardware + scale alone has delivered more than experts thought (e.g., deep learning itself). That makes me skeptical of the "post-2040" scenario.

3. Market & Geopolitical Pressure
Trillions of dollars in future market value depend on being first to AGI (or close enough that people treat it as AGI). The U.S. can't afford to let China, or even the EU, corner that. So the incentives are to push fast. That kind of competition tends to pull timelines inward, not push them out.

4. Missing Pieces Look Solvable
We know what today's models lack:
- Long-term memory
- Grounded reasoning / logic
- Continuous learning
None of these feel like "mystical" barriers. They're engineering challenges. That makes me assign high probability that the pieces come together within 7–10 years.

5. Why Leave 15% for "Much Later"?
Because there's always a chance we've been tricked by scaling luck. Maybe true reasoning or "world models" require a paradigm shift. Maybe today's transformer-based AI is a cul-de-sac, like alchemy before chemistry. If so, progress slows until we invent the next paradigm — which could take decades. That's my 15% hedge.
🔑 My Logic in One Line
We're in a compounding acceleration: better algorithms → more funding → bigger hardware → better results → repeat. That's why I think it's overwhelmingly likely we see AGI (or something close enough that society treats it as AGI) by ~2032.

According to ChatGPT 5
youtube AI Governance 2025-09-06T22:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwGNwcCD-Vnv2aXPJp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsOV7b81fLXojiYyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqsBGVDcLIvfrMXe94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz12nhXSOU2zQ9d0iV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzchS-f5FDWTqFkNTF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz3D53LCIDpLey09XV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzbrTvVkNKC33zCQo14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz1OL8kgAjFrrHReeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzmSGopJVPxk5vXoPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweqJmQtcIzmlq4NPp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
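The raw response is a flat JSON array, one object per comment, with the four coding dimensions (responsibility, reasoning, policy, emotion) as string fields. A minimal sketch of how such output could be parsed and tallied — the two embedded records are copied verbatim from the array above, but the parsing and counting code is illustrative, not the project's actual pipeline:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgwGNwcCD-Vnv2aXPJp4AaABAg","responsibility":"none",
   "reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzchS-f5FDWTqFkNTF4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

codes = json.loads(raw)

# Tally each coding dimension across all coded comments.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    tally = Counter(record[dim] for record in codes)
    print(dim, dict(tally))
```

Running the same loop over the full ten-record array would give the distribution behind the summary table above (e.g. "consequentialist" dominating the reasoning dimension).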