Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm trying to see how some people can't see the vision of AI. People can't see exponential gains, and AI is exponential gains stacked on exponential gains. The reason I don't think this bubble will pop is that bubbles are made of money, and the people building AI realize that money won't matter after AI. People also think that the companies will make money from AI. Here is a thought. Let's say you want a pound of hamburger. Your AI can search all your local stores for their prices and buy the lowest-priced pound. So, for stores, if your hamburger isn't on sale, you don't sell it. Not only that, but more farmers will be able to handle more cows, which means more beef. You can extract this repeatedly. Farmers could have drones sampling their fields constantly and optimizing water, nutrition, and yield. If the robots cooking meals can place an order for the hamburger and robots are delivering it, then you won't have as much wasted hamburger. It stacks and stacks and stacks. What if you have a pound of hamburger in your fridge you don't need, your robot could broadcast to your neighbors, and their robot could ask for it. The thing is, the future world looks really different.
Source: youtube · AI Jobs · 2026-01-19T22:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyVBQiI6PGErWycEIx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugxl-IAad43NABb6Vst4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgycY6fsJsL3BXdg0CJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxSFrKmf_iR6uYhoeN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzifOfq6W-l1Rlimld4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwm6lfIZCjoKGOyzQJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2PzQhMSBJEp51_M54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx9C2d6AecMIZqfyMB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwF3QJE_l7MHb6Hkyl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwqJeEKWXUq-6XtoiN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
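A raw response like the one above can be parsed and sanity-checked before it is written into the coding table. The sketch below is a minimal example, not the pipeline's actual code; the label vocabularies are only the values observed in this response (the full codebook may contain more), and the sample `id` is hypothetical.

```python
import json

# Label values observed in the raw response above (assumption: the real
# codebook may define additional values for each dimension).
OBSERVED_LABELS = {
    "responsibility": {"developer", "ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"industry_self", "liability", "ban", "regulate", "none"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_LABELS.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows

# Hypothetical single-comment response, shaped like the output above.
raw = (
    '[{"id": "ytc_example", "responsibility": "none", '
    '"reasoning": "consequentialist", "policy": "none", '
    '"emotion": "approval"}]'
)
rows = parse_coding(raw)
print(rows[0]["emotion"])  # approval
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label (e.g. misspells a category), so bad codes surface at ingest time rather than in downstream counts.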