Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While I appreciate Dr. Yampolskiy's expertise, I think these '99% of jobs gone by 2030' predictions dramatically underestimate real-world infrastructure bottlenecks that will constrain AI deployment. The physical limits are staggering: AI data centers consume massive amounts of electricity (ChatGPT uses ~2.9 watt-hours per query vs. Google's 0.3), chip manufacturing is constrained by TSMC and a handful of other fabs, and we're already seeing diminishing returns in model efficiency despite exponentially increasing compute requirements. Consider the math: training GPT-4 required ~25,000 A100 GPUs running for months. Scaling this to replace even 10% of knowledge workers would require building hundreds of new data centers, each consuming as much power as a small city. The electrical grid infrastructure, rare earth mining for chips, cooling systems, and physical space requirements create massive bottlenecks. Energy consumption alone is a limiting factor - AI's power demands are growing faster than our renewable energy capacity. We're talking about potentially doubling global electricity consumption just for AI infrastructure. Projections claiming 'all jobs disappear by 2030' seem to ignore these engineering realities. While AI will certainly advance and automate many tasks, the physical constraints of compute, electricity, manufacturing capacity, and basic thermodynamics will likely slow deployment significantly. I think we need more nuanced discussions that include energy engineers, infrastructure specialists, and supply chain experts alongside AI researchers. The conversation shouldn't just be about what's theoretically possible, but what's practically achievable given real-world resource constraints.
youtube AI Governance 2025-09-05T20:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwD488oCPk21Ad2wlx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugw6ekVkCbJoIWYarct4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugynjp1MKte6mW1o23V4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzCRsSq1OuEjFm3Ls54AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzHi0vf0HOB85nHEsN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugyxo73rRgejcSlQCZ54AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyPL4bb1oGsGT9VzjR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw8Dcc0ucE8YvFiotF4AaABAg", "responsibility": "elite",     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxSqD-EzEYzxPXSNKl4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwZL93zVRppdAwlBR54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"}
]
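The coded dimensions shown in the table above are recovered from this raw response by parsing the JSON array and looking up the comment's id. A minimal sketch of that lookup step, assuming the model returns a valid JSON array of objects keyed by `id` (the `coding_for` helper and the two-entry excerpt are illustrative, not the tool's actual code):

```python
import json

# Excerpt of a raw LLM response: a JSON array with one coding object per comment.
raw_response = '''[
  {"id": "ytc_UgwD488oCPk21Ad2wlx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw6ekVkCbJoIWYarct4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions for one comment (hypothetical helper)."""
    return codings[comment_id]

result = coding_for("ytc_Ugw6ekVkCbJoIWYarct4AaABAg")
print(result["emotion"])  # indifference
```

For the comment displayed on this page, the second entry yields responsibility=none, reasoning=consequentialist, policy=none, emotion=indifference, matching the Coding Result table.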