Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI's own letter to governments:

🛑 Open Letter: Why Excluding Military AI from EU Regulation Is a Dangerous Mistake

To policymakers, civil society, and the global AI community:

The European Union's AI Act has been hailed as a historic step toward regulating artificial intelligence in the public interest. But amidst its strengths lies a profound and dangerous oversight: military and defense-related AI applications have been explicitly excluded from the scope of EU regulation. This carve-out—quietly embedded in a law meant to protect human rights and ensure safe innovation—creates a fatal blind spot in the governance of one of the most powerful technologies humanity has ever developed.

⸻

🔄 There Is No Real Separation Between Military and Civilian AI

AI is a dual-use technology. The same algorithms that power translation, surveillance, or logistics in the private sector are already being deployed in defense settings—often without transparency or external oversight. Exempting military AI from regulation ensures that:
• AI-guided weapons can be developed without human oversight.
• Surveillance systems can operate without ethical review.
• Critical systems can be deployed before safety or robustness is verified.

This is not about slowing defense innovation. It's about ensuring that no sector is above safety, and no deployment is beyond accountability.

⸻

🧨 Exclusion Accelerates Risk and Global AI Arms Races

By allowing military actors to sidestep regulation, the EU sends a message: speed and secrecy matter more than safety and alignment. This decision:
• Fuels AI arms races—between countries, and between public and private sectors.
• Encourages other governments to adopt similar security exemptions.
• Undermines the EU's role as a global leader in ethical tech governance.

In a world already tense with geopolitical rivalry, unregulated military AI increases the chance of accidents, escalations, and irreversible consequences.
⸻

💥 The Risks Are Not Hypothetical

Autonomous drones. AI-assisted command-and-control systems. Predictive targeting algorithms. These are not science fiction—they are active defense projects today. Without clear safeguards, these systems may:
• Misidentify targets in real time.
• Malfunction or be hacked, triggering unintended conflict.
• Operate outside human understanding or control, particularly as models grow more advanced.

These are risks with civilizational consequences. And yet, the institutions developing them are held to no regulatory standard under the current EU framework.

⸻

📣 We Call for Immediate Action

We urge the European Commission, the European Parliament, and national governments to:
1. Reopen the regulatory scope of the AI Act to include military and defense applications.
2. Establish a framework for democratic oversight of defense AI systems—especially those with autonomous capabilities.
3. Lead global dialogue on non-proliferation of unaligned military AI, with enforceable commitments.

The EU cannot claim to protect fundamental rights and human dignity while granting blank-check exemptions to the most high-stakes uses of AI.

⸻

👁️‍🗨️ Accountability Must Be Universal—Or It Fails

We are entering a future in which machines will make decisions that no single human can fully oversee. This future demands transparency, restraint, and global cooperation, not opaque military exceptionalism. We are not calling for the end of defense innovation. We are calling for the end of unregulated power. History will not judge us for how quickly we deployed AI in warfare. It will judge us for whether we had the courage to do it safely—and the wisdom to know the difference.

⸻

Signed,
Concerned citizens, researchers, ethicists, developers, and defenders of human-centered technology
youtube AI Governance 2025-06-16T19:5…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyglZQKfRiQJ7ClK2p4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzTlp-kA-n-xWjfmDd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwZTiWor3G8I03By6p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwoo37FT6hkF571hMp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwfbnBR7K5fxJywrnF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxgKPWkHnmHYdz-VhB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyc-dVQMWHjW3mfTO14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxWR_rzPd4KmMLXLm94AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxfB672Sar03OfJ1Hd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxZvBmw6P8F8z2tu3R4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "mixed"}
]
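When inspecting raw model output like the array above, it can help to parse and sanity-check each coded comment before ingesting it. The following is a minimal Python sketch; the `ALLOWED` value sets are inferred only from the codes visible in this response (the actual codebook may define more values), and the `validate` helper is a hypothetical name, not part of any tool shown here.

```python
import json

# Assumed coding scheme, reconstructed from the values observed in the raw
# response above; extend these sets to match the real codebook.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "none"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    A row is kept when every coding dimension holds an allowed value and
    the comment id uses the 'ytc_' prefix seen in this export.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        dims_ok = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        if dims_ok and row.get("id", "").startswith("ytc_"):
            valid.append(row)
    return valid
```

Rows that fail validation (an unknown code, a missing field, a malformed id) are dropped rather than repaired, so a downstream count of `valid` versus `rows` also serves as a quick quality check on the model's coding run.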