Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To me, the main problem with AI is that it replace humans too well in a capitalist society. Our worth is measured on how much we can produce and how unique it is. AI can copy us and will probably outperform us in the future both in quantity and quality. Since out society is ruled by an elite they wont be affected and will simply push humans out of the system. There is not much we can individually do. Humans need to get rid of the rich elite, tame cultural and religious demons, set education as a way to make peoples that can think critically instead of educated laborers and change how we value human worth since we are not in a situation where survival should be hard anymore if not because of a growing overpopulation. Revolt is an option but most people lack understanding and vision to root out every bad weed and then organize a better society. Ultimately my only hope is that an AI will take over humanity and that by sheer luck it will optimize human personal growth, individual satisfaction and civilization progress with a strong emphasis on avoiding extreme cases. Not impossible but very unlikely. Humans future is probably closer to what many science fiction writers designed as dystopian civilizations. Have a nice day.
Source: YouTube · Viral AI Reaction · 2025-09-08T00:4… · ♥ 2
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyaWRaJxgkM9Fig6HN4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyBO_k_004n31l9pe54AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzZ-446u5sqDydluW54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgxHgNwI6iJcjHvmAZt4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "ban",           "emotion": "mixed"},
  {"id": "ytc_UgwiIZMSNdsWFEkkg614AaABAg", "responsibility": "unclear",     "reasoning": "contractualist",   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyIRmE3wFoSbNlqGHR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugwp_mMx4GTVPrHJLVN4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgxU-dA5wXmQa7iUDdp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgywgyybtHK37LZ3y1R4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgyRyZU2L0UQ2EvR5AR4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"}
]
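The raw response is a JSON array of records, one per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal Python sketch for parsing a batch response and indexing codings by comment id (the `index_codings` helper is hypothetical, not part of the pipeline; the snippet below uses two records copied from the response for illustration):

```python
import json

# Two records excerpted from a raw batch response (assumed shape: a JSON
# array of flat objects with an id and four coding dimensions).
raw = '''
[
  {"id": "ytc_UgzZ-446u5sqDydluW54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyaWRaJxgkM9Fig6HN4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json):
    """Parse a batch response and index the codings by comment id,
    rejecting records that lack the id or any dimension."""
    out = {}
    for rec in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError("malformed record: %r" % (rec,))
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_UgzZ-446u5sqDydluW54AaABAg"]["emotion"])  # fear
```

A check like this catches truncated or malformed model output before the codings are written to the results table.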