Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a total absence of economics in this analysis. It takes at least five years to build a factory to make robots. They will be very expensive. They will need to be specialized for thousands of jobs. Dreadful economies of scale. Designing and building robots that can climb a transmission mast and make repairs will remain uneconomic for a long time. Same for plumbers who can dive under your sink. Moving dirty dishes into the dishwasher at McDonald's - would it be worth using a robot? Manufacturing is already automated - AI cannot add much more. Robots are mechanical systems - they fail, inevitably. More expense and down-time risks. The prediction of catastrophic unemployment is more realistic regarding white-collar jobs. Here the solution is to reduce working times, gradually, and give people and the general culture time to cope with the improved work-life balance. AI can't hike in the Appalachians or enjoy a long meal with friends and family. Or enjoy listening to Beethoven. There will even be time to educate Americans. But America is doomed because of its backward values and its lead in AI. The cornucopia of increased wealth requires heavy taxation to spread its benefits, and maintaining salaries as working hours decrease. Cf. the curse of raw materials in countries like Nigeria, and compare it to Norway. No advanced country is less prepared, mentally and politically, to capture the benefits of AI. Being the first adopters, with a polity totally corrupted by tech and financial "elites", the US will continue to favor greed and accept widespread misery. China and Europe, each in its own way, have a chance.
youtube AI Governance 2025-09-06T05:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwswHnHaifaBBOte-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwF2WpH_3HooUeTvU94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxaeBMBwph_YWof71Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_r7SSSR6EPrKJri54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvTIt7IFCS6LI_mG54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw4LjCV4WInSBu24qR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOXUfUZHSbbzW5FFx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUj-5rmZZNE41vy4d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVpvAFzdxRpdpj0fV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugx96vFLHJct_SAlK494AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
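The raw response above is a JSON array of per-comment codes, one object per comment with an `id` plus the four coded dimensions. A minimal Python sketch (an assumption about how such output might be consumed, not part of the original pipeline) for validating the records and tallying one dimension:

```python
import json
from collections import Counter

# A two-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id":"ytc_UgwswHnHaifaBBOte-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwF2WpH_3HooUeTvU94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)

# Check every record carries exactly the coded dimensions plus the comment id.
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")

# Tally one dimension across all records, e.g. the emotion codes.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts)  # e.g. Counter({'indifference': 1, 'approval': 1})
```

Running this over the full ten-record array would surface any malformed record before the codes are written into the per-comment table above.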