Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
a At the 14:00 mark, Hinton hints at (but doesn't fully express) a possible problem or flaw in the entire ai scheme: these companies that 'will be better off' have become so by selling products or services to all these people who 'will be worse off' (some running smaller companies, most as employees) -- who right now constitute the massive consumer base that made these companies rich -- as they become more 'worse off' (presumably by losing employment and their disposable income) they will no longer be able to afford to buy these companies' services and products. So then, these 'better off' companies will then turn to selling their services and products to other 'better off' companies...for a while (until these companies' ai tech replaces the need for said services and products). What then will be needed to purchase? Food, water, energy (same as it ever was) -- as long as there are still people around. Hinton and the interviewer do state that those 'worse off' persons will rely on their governments to provide a basic minimum income ('universal basic income' -- paid for by taxes on the remaining companies, or...?). But more: it will necessitate a form of Socialism, but also, great political power to force people to stop producing offspring (that will need continued government support, more taxes, etc.). The 'better off' companies will only tolerate so much taxation, as will the Conservative / Libertarian parties, and this will increase the political pressure to mandate peoples' behavior, choice and freedom (to marry whom they choose, to have children, to get necessary medical treatment, pursue certain creative goals, etc.). The most likely scenarios do not 'game out' well for the 'worse off' (even if they get temporary assistance from the government or the corporations that increasingly control their lives). The 'ai arms race' is a race towards our human demise (which Hinton DOES state/predict early on in the interview).
youtube Cross-Cultural 2025-10-02T21:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxjEvPlnFimyy2fynd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh_4azVtFbqUVgiel4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwktFwJb_dbV4-nuUd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwLnuihNj-DBQzCzDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyriQeJsGmmbwKeS294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlTjDqoWDcmeny8oJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-CDqyxKNk9vlfADJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjMfxkekFi7_kfh1l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwaCe4zU5YRvf-D7kF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXLbi_vZg4xKNFOd54AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
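To illustrate how the coding-result table above is derived from the raw batch response, here is a minimal sketch in Python. It assumes the raw response is a JSON array of records with the four dimension fields shown above; the helper function `coded_values` is hypothetical, not part of the actual pipeline, and the `raw` string below is abbreviated to the single record whose id matches this comment's coding result.

```python
import json

# Abbreviated raw LLM response (one record copied from the array above).
raw = """[
  {"id": "ytc_UgwLnuihNj-DBQzCzDZ4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coded_values(raw_json: str, comment_id: str) -> dict:
    """Hypothetical helper: parse a raw batch response and return
    the dimension->value mapping for one comment id."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

print(coded_values(raw, "ytc_UgwLnuihNj-DBQzCzDZ4AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'fear'}
```

The values returned for this id match the Dimension/Value table above (company / consequentialist / unclear / fear), which is how the per-comment view is populated from the batch output.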