Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Automation has historically raised overall wealth but is clearly bad for individual workers whose tasks get automated away. AI agents could automate more than half of US work hours, manual and cognitive, potentially pushing unemployment to 10–20% in the next few years. There is a hump‑shaped relationship between automation and wages: a bit of automation helps workers, but near “full” automation, it pushes wages down sharply. If AI and robots surpass human cognitive abilities, there may simply be no new uniquely human tasks left to move into, unlike past industrial revolutions. In that world, labor stops being the key bottleneck; instead, scarce things like energy or critical minerals could capture most of the gains from growth. Rapid AI progress could trigger an “intelligence explosion,” with machines driving science and innovation much faster than human minds ever could. Without new ways of sharing income, wages for many tasks could fall toward machine cost (for example, an essay that costs a human 50 dollars and an AI less than 1 dollar). He argues we may need something like a universal basic income, and proposes a “seed UBI” (a very small payment started early) that can scale up automatically if disruption grows. The AI industry has massive fixed training costs, pushing it toward natural monopolies or a small oligopoly, so regulators should watch for vertical integration and lock‑in. In the best case, if we align powerful AI and share its gains, we could reach a world of “universal high income” where work is largely optional, but getting there is a major political and ethical challenge.
youtube · AI Jobs · 2026-04-15T11:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxxa84H4JEePADtvGh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz71kv5FSgHOjtMcC94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzx0gzQVJ_lZtpaIg54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwHCYW0qA8LTPcsjAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxF4UQAc5e4fL0Plmh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxnIC6hDqecdqRD7xF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxxd3zFdfpJk_4uyrV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwAUHCRFIHo9UiZfZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyMDM_PdufPwlIQ2xB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgxUeaMbCWZ30P7F7ht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]