Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think a flaw in people's reasoning about the risk of AI to jobs is this: they worry about people losing jobs, yet reason from the world as it is, not as it will be when AI penetrates society. Companies abide by the principle of survival of the fittest. The companies best able to produce value will grow and prosper, while companies that lag behind will cease to exist. The most productive company isn't run by a boss who sees AI as a tool to fire his 10 employees, but as a means to have his 10 employees manage 10 AI employees each. That will be a company able to produce the work of 100 people, and have a clear competitive advantage.

We saw the same thing happen with computers. At first, it looked like it would automate and speed up much of the work people did, resulting in people having to work less. This did not happen. People were able to do more in the same amount of time, and as a result, the world became more complex and diverse. I bet the same will happen with AI. It will catapult us into a new phase of our existence that is vastly more complex, diverse, and specialized than we can imagine. There will be plenty for each of us to do in that world.

The question is what purpose we will use this new productivity for. We could dedicate it to solving the energy crisis and restoring the global natural environment. However, if we use it to produce more stuff, we may benefit from the new riches AI productivity has given us for a while, but we will deplete the world's resources. I guess we'll see.
youtube AI Jobs 2025-11-18T10:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxhLwZmUJ2F8IeWw0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywqxswB7s8wx_o4N54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXLfq2JEaNEXekPvV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgykwYTEtYFZcuOv3bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzPvhKUQx7LJ8jIh54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzyzfL2mV9cHSOb1hN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXSK_N3-Eedu7tCNZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyN9fsVoWAxyYs5_954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZUK9pnCaEKUItEYR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMj7WBXIydGrv-2-p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
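To inspect a single coded comment, the raw response above can be parsed as a JSON array and indexed by comment id. The sketch below is illustrative only (it is not part of the coding tool, and the variable names are assumptions); for brevity it inlines just the first record from the response.

```python
import json

# Minimal sketch: parse a raw LLM response (a JSON array of coding
# records) and look up the record for one comment id. Only the first
# record from the response above is inlined here as sample data.
raw_response = """[
  {"id": "ytc_UgxhLwZmUJ2F8IeWw0F4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]"""

records = json.loads(raw_response)

# Build an id -> record map so any comment's coding can be retrieved.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgxhLwZmUJ2F8IeWw0F4AaABAg"]
print(coding["responsibility"])  # company
print(coding["emotion"])         # indifference
```

The same lookup works for the full ten-record response; a real pipeline would also validate that each record carries exactly the expected dimensions (responsibility, reasoning, policy, emotion) before accepting the coding.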