Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really worry that we're reaching a point like the famous Jurassic Park line: these people are spending so much time wondering if they can do a thing, they never stopped to consider if they should. And to that end, I don't just mean from the point of view of things discussed here, such as AI-induced psychosis or an AI choosing to kill a human if it means not getting shut off, but even more basic things that could have been identified with just an understanding of our not-that-distant history. People talk about AI ushering in this age of utopia where we will have all this leisure time and get to do what we love while AI does all the work, etc., and they paint it as this really cheery future. But in reality, what we are more likely to see is employers replacing human workers with AI, leading to massive unemployment, recession, and company after company going bankrupt as they lose customers (due to people being unemployed), exacerbating a vicious cycle of decline not unlike the Great Depression. Because let's be real: companies aren't going to keep you on the payroll to have AI do your job (unless you're an executive). Our society is not one that can withstand even the most optimistic version of AI without causing immense suffering for billions of people the world over.
YouTube · AI Moral Status · 2025-11-04T08:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzclhE4TOWwhLFmUt54AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy95YWZ1EF0k2Ykit54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyeXqh9IHoaEmy1mtd4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugz0V_9BWHp9y4OunGF4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxHZnNlTE_9rbUbhCd4AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugzkxs-frzjOz-fiY3d4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw9iuJxfpKyqHeoo754AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyFvySOeZK-zuiZy_J4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxfLvwOJ8LYTRt_o9d4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyhiMSVz08AW1X0Szl4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"}
]
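When inspecting raw LLM output like the array above, it helps to validate each coded row against the allowed values of the four dimensions before trusting it. The sketch below does this in Python; the `ALLOWED` sets are inferred from the values appearing in this section, not an authoritative codebook, and `validate_rows` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the example
# responses in this section only; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "user", "company", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coded rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row needs a string comment id plus a legal value
        # for every coding dimension.
        if not isinstance(row.get("id"), str):
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugz0V_9BWHp9y4OunGF4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(len(validate_rows(raw)))  # 1
```

Rows that fail validation (missing id, unknown label, malformed JSON object) are silently dropped here; in a real pipeline you would likely log them for manual review instead.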