Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with the argument at around 4:45. While Ai could make us much more productive, this doesnt mean exactly that we have to work less. My Theesis for that is that if you look at how much a individuals productivity has increased in the last years, and the average working hours per week you dont get any correlation between increased productivity and less time worked. So my argument is that we will become much more productive, we will all still be working roughtly the same time and will still get only a slightly higher pay which will be too low in correlation to the increased productivity of the individual. One point I do want to give you is that goods may become cheaper with a more automated production, but not in such a vast way that we can live a luxury lifestyle we cant even dream of rn, as u put it. So in the end, as harsh as it sounds, very little changes for the normal person since the profit of the higher productivity wont be given to the individuals to a fair extend, but will stay at the top. If productivity was linked to more pay then we should have minimum wage of aroud 20 $ in the US today but its not its 8$ or so. Which means that the ca. 12$ „productivity“ difference goes to the bosses on top. Not the individual. PS: sry for my bad english lol
youtube · AI Jobs · 2025-12-27T09:0… · ♥ 8
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxHpqczDcxgFRAVy7J4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxKdgrVYheZjSY8wTJ4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzojYTo1_oR7SpsNRN4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugyj3Z07IKPfFQMHJLh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzNxuY9cn68_I4IHOt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzNDgX_g_bBlMmRLd14AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw9hJQDg-qfpEw3Xzh4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwUQpHGWd8tfgVX4Wt4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxKecgs-Fsv1f7PGoF4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgykFNPsDiJNY_NFDB54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"}
]
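The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal Python sketch (using a two-entry subset of the array above; the batch-response format is an assumption from what is shown here) of how such a response can be parsed and looked up by comment id:

```python
import json

# Subset of the raw LLM batch response shown above: a JSON array of
# per-comment coding objects, each carrying the four coded dimensions.
raw_response = '''
[
  {"id": "ytc_Ugw9hJQDg-qfpEw3Xzh4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxHpqczDcxgFRAVy7J4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the codings by comment id so a single comment's result can be
# retrieved without scanning the whole array.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up one comment's coding result.
coding = codings["ytc_Ugw9hJQDg-qfpEw3Xzh4AaABAg"]
print(coding["emotion"])  # mixed
```

Indexing by id is what lets the "Coding Result" table for a single comment be rendered from a batch response that covers many comments at once.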