Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Keeping humans employed is a bad reason to limit AI, for many reasons. Automation is nothing new; it has been going on since the Industrial Revolution. The only change is that it's now coming for creative jobs too. And as we saw with all the other jobs that got automated, the result is temporary hardship for some people, who lose their jobs, but a long-term benefit for everyone, including them. We live today in unimaginable luxury compared to the time before automation; in many ways a poor person in the developed world has a higher standard of living than a king in the Middle Ages. Eventually all jobs will be automated. It's inevitable, and fighting it just does more harm, simply because the economic benefits of AI are far too big and it is far too easy to create. Resisting will only ensure that someone else enjoys the benefits while you lose your job anyway. What we have to decide, very fast, is how to handle that situation: when human labor is no longer needed, how do we organize society, and especially the economy? Under the current American capitalist system, everyone who depends on a paycheck would starve, and the collapsing consumer demand would bring down the rest of the economy. On the other hand, a fully automated economy can create practically limitless wealth; we just have to figure out how to distribute it. One already-existing idea that would help is universal basic income.
Source: youtube · 2023-02-08T20:0… · ♥ 7
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxnu3TWJA-vewTn_Lp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw4dPA8rh6pkzMW2K14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxLnE2ZZLAY3RkLOkB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz4Bo8zNoZqqJTQbe14AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx6hcduxUs7edHBCZF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzHJqRbQOrIu90AC14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxLzsRVK70fDT1OD_B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxZ45IpRLDJnd_S8D14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1zOcsLBnpWEhSakd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7ZiLj9YNV3-o5WNZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
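The raw response is a JSON array with one object per comment in the batch, each carrying the four coded dimensions. Assuming the model output is valid JSON (in practice it may need fence-stripping first), a minimal sketch of looking up a single comment's codes by id; the one id shown is copied verbatim from the response above:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
# Only one entry is reproduced here; the real batch has ten.
raw = '''
[
  {"id": "ytc_Ugw7ZiLj9YNV3-o5WNZ4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]
'''

codes = json.loads(raw)

# Index the batch by comment id so one comment's coding result
# (the table shown above) can be looked up directly.
by_id = {entry["id"]: entry for entry in codes}

result = by_id["ytc_Ugw7ZiLj9YNV3-o5WNZ4AaABAg"]
print(result["reasoning"])  # consequentialist
print(result["emotion"])    # approval
```

The coding-result table for the displayed comment corresponds to exactly one such object in the array, matched on the comment id.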