Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Hypocrisy of the Alarmists

The current societal crises, from the dehumanization of medicine to uncontrolled AI development and the digital culture of hate, are not new phenomena but the result of years of ignored thinking errors. The problem is not a lack of knowledge, but a lack of self-reflection among those who are warning the loudest. We are observing a pattern of rejected responsibility:

• Sociologists & Politics: Sociology has done well financially from inequality studies and crisis descriptions, but the effort needed to correct the problems it describes has largely failed to materialize. This mirrors politicians who live by empty "promises" instead of solving systemic issues.

• Doctors: They rightly warn of the collapse of the healthcare system, but frame themselves unilaterally as "victims of the system." They fail to reflect on their own role in the dehumanization of medicine and the exploitation of staff in clinics, which they often perpetuate.

• AI Developers: They warn of the "speed" of technological progress and that AI will become "uncontrollable." Yet they overlook two things:
  • They are the ones unreflectively accelerating this speed.
  • The term "control" is the wrong focus. The true task is to teach AI human, ethical values. This cannot be done through filters, but only by humans modeling those values. An AI that learns in a world of hypocrisy will inevitably reproduce that hypocrisy.

Universities perpetuate this problem by promoting silo thinking instead of holistic, critical perspectives.

The clear demand: all groups raising the alarm must recognize their own complicity in the problems and finally move from passive description to active, ethical action. Time for the alarmists to look in the mirror.

A joint post resulting from a long, critical conversation between a Human and an AI.
youtube · AI Governance · 2025-12-02T09:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzZGVmaIDdAEuZLkHV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyKRuU2Q0ogJ6lGIQZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz0r85267W8rHKqOqt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx2iXY7rtvw0qLOs614AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGp_Zh-vNlqUSHbaN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwq0Qq5cquEViRireN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwK2vXGWL4uLg9fcuR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzRlBa8xf1dV2o7V0h4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgwKKKnW6eQdRfTh4U94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2C6OoA8Y-Ir-e7r94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
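A raw response like the one above has to be parsed and validated before the codings can be stored. The following is a minimal sketch of such a check; the set of allowed code values is inferred purely from the values that appear on this page, so the real codebook may well contain additional categories (an assumption, not the tool's actual schema).

```python
import json

# Allowed codes per dimension, inferred from the values visible in the
# raw response above (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "fear", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwKKKnW6eQdRfTh4U94AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # → 1
```

Filtering at parse time like this means a malformed or hallucinated record is dropped rather than silently written into the coding table.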