Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Looking back at the records, we can observe that in human history it was always true that the Sun has always risen every day (though the period is not once a day on the poles, and there were eclipses). So we could theorize that it will be true for tomorrow and every day after that. But it is not! (It doesn't matter for us because it's not gonna effect us, but at some point far into the future our Sun will turn into a red giant and no longer rise once per day.) So I would really like to know what does Neil bases his belief that similar relative number of humans will still have (some kind of) jobs in the AGI age! Because from my example we can see that just because something was true in the past it does not logically follow that it will in the future. Not very scientific from Neil, I'm a bit disappointed. :( If we really take a closer look at human history then we can see that the industrial revolution was not all good, especially for the people who worked during it. There was a long period of job loss and a lot of pain before things got better. ...and the profit increase that industrial revolution caused was not distributed equally. So probably the AI revolution won't be painless either. And mostly people already in power will be the ones benefiting from it (if everything goes right and AI does not cause some human civilization ending problem).
Source: youtube · AI Moral Status · 2025-07-24T00:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugzbyx5HUWHWtCKRtfN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzvO-RBDhtz5TY_0Q14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwZkSVWhLkzOEcVLgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz-gRL5lA0Vw2xFaCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzmTkSdVq60o-Z1Tih4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw2s4XjBi3C0fqpY7x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxwd0sUF25u5dDZE-l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxBcLF4g2BSMD7gl_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyjexmf5ZvfuHSFynZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyitefYd4ZkNP-NwAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"} ]