Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When one of the pioneers of AI — someone who helped build the technology itself — publicly warns about its risks, we should listen. AI has extraordinary potential to advance medicine, education, and human productivity. But without strong ethical guardrails, transparency, and accountability, it can just as easily amplify misinformation, erode privacy, and concentrate power in dangerous ways. This isn’t about slowing innovation. It’s about setting a high bar. We have an obligation to demand that governments and companies develop and deploy AI responsibly — in ways that advance humanity rather than harm it. If we don’t insist on ethical standards now, the consequences won’t just be destructive to others. They’ll ultimately be self-destructive to the very society AI is meant to serve. Right now, AI governance is fragmented. Some standards are mandatory (laws like the EU AI Act). Many are voluntary frameworks. Enforcement varies widely by country. This is why public pressure matters. Ethical AI won’t happen automatically — it requires coordinated regulation, technical standards, corporate responsibility, and informed citizens who demand accountability.
youtube · AI Responsibility · 2026-02-24T14:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugxzsr4GVmTucVz6rbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxK78oatKYQLDijCMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw1jMMzXw8Xx8kRoiJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwLjUyYK-hKXnJlIbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzlx__mzd5bTF1rH_B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyFdGChB0p6hnkswV94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxBWEbP2c1fH90efVB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugw7i6dBWBYLJj4bvUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugxxp2-6fYiEnX-jQvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz71qFAXQtTqxvTc2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"} ]