Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The good news if you're worried about A.I. (and you probably should be, just due to the way it promises to concentrate even more power in the hands of a few people) is that on the hardware side, scaling on current technology is slowing down. The industry saw the end of Dennard scaling back in 2004, so ever-smaller transistors now leak more current with each generation. It saw the end of transistor cost scaling around 2010, which puts upward pressure on chip prices as transistor counts continue to rise. SRAM scaling has been basically flat since 2020, though logic is still shrinking. There are increasing challenges in heat dissipation as power densities in chips have continued to climb. Solving these problems takes more R&D money every year. Companies can still scale up compute with more hardware, but the power grid scales slowly where it scales at all, and the supply of water for cooling scales even worse. Personally I'd rather not live in a world where average people are out-competed for drinking water by giant tech companies chasing economic dominance the likes of which Nestle could never dream of. There is a reason they're willing to throw money at A.I. with wild abandon, and that reason is they think they'll be able to direct huge swaths of human wages into their coffers. Software breakthroughs are much harder to predict; there may be further breakthroughs in efficiency there. Maybe some new breakthrough will happen in hardware, but wide deployment will take time and even more money.
youtube AI Governance 2025-08-26T16:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugwm68MALyX4azap4IN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz0HyYtSghRnpLPtRF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKr3IZk6iHO7VUO5p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBeeQz0s2htc1MPTt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzb3ixO1zczy632JjJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
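The coding result shown above is a single record pulled out of this batched JSON response by comment id. A minimal sketch of that lookup (Python; assumes only that the raw response parses as a JSON array of objects keyed by "id" — the records below are copied verbatim from the response, truncated to two for brevity):

```python
import json

# Two records copied verbatim from the raw LLM response above
# (the full response contains five).
raw = '''[
  {"id":"ytc_Ugwm68MALyX4azap4IN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz0HyYtSghRnpLPtRF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugz0HyYtSghRnpLPtRF4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → company consequentialist regulate fear
```

The printed values match the Coding Result table above, which is how the per-comment view is reconciled with the batched raw output.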