AI in the military: Legal framework

Hindustan Times

2 days ago



As India pushes forward in its efforts to become an ethical and strategic leader in military Artificial Intelligence (AI), it is important to evaluate where we stand globally. In an earlier article, we looked at how India rolled out frameworks like DRDO's ETAI and the large-scale IndiaAI Mission, laying the groundwork for responsible, transparent, and effective adoption of AI in defence. We also saw the pressing need to turn robust paper policies into real-world practice, moving beyond agency silos to coordinated ethical governance.

Globally, the debate surrounding AI in the military has moved beyond innovation alone. It is now focussed on ethics, rules, and trust. Whether it is NATO capitals, G7 forums, Pentagon briefings, or United Nations (UN) meetings, countries find themselves returning to the same fundamental questions: Who is responsible, how will human oversight be exercised, and how do we ensure AI systems are reliable, explainable, and accountable?

Within NATO, member states signed onto six guiding principles for responsible use of AI in defence in 2021 (revised in 2024), with strict commitments to lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation. These were homegrown concerns, not imposed externally. The US formalised its Ethical Principles for AI in 2020, after extensive input from civil society and industry. Its five central values (responsibility, equity, traceability, reliability, and governability) are now built into procurement and deployment at every level. Traceability, for example, means not just tracking how an AI arrived at a decision but equipping commanders to actually understand those decisions on the battlefield. Governability ensures that, at any sign of AI going rogue, humans retain a reliable way to step in and shut it down. All of these principles, while guided by ethics, are enforced through contracts, project reviews, and documented audits.

The UN, too, has taken up the call. Since 2016, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) has debated possible new rules for weapons that can select and engage targets on their own. While a treaty has yet to emerge, the GGE did settle on eleven guiding principles in 2019: international humanitarian law must apply to all weapon systems, and meaningful human control and legal accountability are non-negotiable.

India's voice in these talks has always been balanced and pragmatic. Instead of supporting a total ban on autonomous weapons, India has argued that any technology should be judged by its real use and impact, not merely by whether it is autonomous. Indian representatives have also highlighted a basic issue: there is no universally accepted definition of lethal autonomous weapons, so a rigid treaty now might be premature.

Turning to Europe, the EU recently enacted legislation regulating AI with requirements based on risk levels. It is expected to set a global standard for AI governance, much as the EU's General Data Protection Regulation (GDPR) influenced data privacy worldwide. Defence, though, is a carve-out: Article 2(3) says the law does not apply to AI systems created purely for military or national security purposes.

Back home, India's approach is still young but moving in the right direction. The Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) have accelerated efforts to bring trustworthy AI into the services.
The DRDO's ETAI Framework sets out clear rules: reliability, safety, transparency, fairness, and privacy. These principles guide how AI is built, tested, and used, echoing the values articulated by NITI Aayog around accountability and inclusivity. Despite strong institutional backing, though, enforcement still rests mostly on internal standards and practices rather than legislation.

That said, India is not alone in this approach. The US weaves ethical principles into acquisition and contracts; NATO builds AI testing and validation boards; the EU formalises principles into law (though mostly for non-military uses). India has all the right institutions, including MeitY, DRDO, and DAIC, but needs to take the next step from policy to enforceability.

Looking ahead, India can seize three opportunities. First, embed ethical principles into procurement and deployment protocols for all military AI systems. Second, create a permanent inter-agency AI commission with representation from all key stakeholders, such as the armed forces, DRDO, MeitY, and legal experts. Third, stay deeply involved in shaping global standards, not only to protect its own interests but to help set them.

In a world where civilian and military AI are intrinsically linked, and capabilities are evolving fast, it is simply not enough to regulate after the fact. India has the chance to lead with a robust, clear, and enforceable framework that balances security with ethics. In the new era of military AI, trust, transparency, and clear rules are not limiting; they are the true sources of strategic strength.

This article is authored by Zain Pandit, partner, and Aashna Nahar, associate, JSA Advocates and Solicitors.
