Latest news with #LAWS


The Herald Scotland
28-07-2025
- Politics
- The Herald Scotland
Why we must keep humans at the heart of AI in warfare
Since 2016, discussions within the Convention on Certain Conventional Weapons Group of Governmental Experts on LAWS have been ongoing, but International Humanitarian Law (IHL) still lacks any specific, binding regulations relating to AI. As noted by International Committee of the Red Cross (ICRC) President Mirjana Spoljaric, AI in war is 'no longer an issue for tomorrow', but rather 'an urgent humanitarian priority today', requiring the immediate 'negotiation of new legally binding international rules'. Accordingly, United Nations Secretary-General António Guterres recommended, in his 2023 New Agenda for Peace, that 'a legally binding instrument' to prohibit and/or regulate AI weapons be concluded by 2026.

The ICRC has stressed that responsibility in warfare must remain with humans. 'Human control must be maintained,' it argues, and limits on autonomy urgently established 'to ensure compliance with international law and to satisfy ethical concerns'.

In 2022, the MoD itself echoed this sentiment. It stated that only human soldiers 'can make instinctive decisions on the ground in a conflict zone; improvise on rescue missions during natural disasters; or offer empathy and sympathy.' The then Defence Secretary Ben Wallace added that 'at its heart, our Army relies on the judgment of its own individuals.' A recruitment campaign at the time carried the tagline: 'Technology will help us do incredible things. But nothing can do what a soldier can do.' Colonel Nick Mackenzie, then Assistant Director for Recruitment, highlighted that, while 'technology is really, really important… there is always somebody, a person, behind that technology,' who is ultimately responsible for its use and the decisions it enables.

Since then, however, the use of AI-enabled rapid target identification systems in contemporary conflicts has grown rapidly, with notable examples being Lavender and Where's Daddy (Israel/Palestine) and Saker and Wolly (Russia/Ukraine). A human being is generally still required in order to engage any lethal effects, but technological capabilities are already being developed to remove human input from the targeting process altogether.

Against this backdrop, the MoD's Strategic Defence Review 2025, released last month, calls for 'greater use of autonomy and Artificial Intelligence within the UK's conventional forces'. 'As in Ukraine,' the Review continues, 'this would provide greater accuracy, lethality, and cheaper capabilities – changing the economics of defence.' One example is Project ASGARD, which will help the Army locate and strike enemy targets at greater distances using AI as a 'force multiplier'. This is just one of over 400 AI-related projects being run by the MoD. What remains unclear, but is critical from a legal and moral perspective, is what role human judgment will play in these projects and the military operations they support.

Computer scientist Pei Wang has said that while AI can behave like human intelligence in some ways, it is fundamentally different. AI shouldn't replace human intelligence, but rather support and enhance it – helping people make better-informed decisions. Human-robot interaction specialist Karolina Zawieska warns of the need to distinguish between what is human and what is only human-like. AI systems often function as a 'black box', meaning it is not always clear how or why they produce certain outcomes. This creates serious problems for human understanding, control, and accountability.
When properly used, AI can support situational awareness and help human operators make better decisions. In this sense, it is a tool – not a decision-maker. But if too much control is handed over to AI, we risk removing human judgment and, with it, moral responsibility.

Professor Jeff McMahan, moral philosopher at the Oxford Institute for Ethics, Law and Armed Conflict, has argued that it is essential for combatants to feel 'deep inhibitions about tackling non-combatants'. However accurate or efficient AI may be, these inhibitions cannot be replicated by algorithms. As political scientist Valerie Morkevičius has pointed out, the emotional and moral 'messiness' of war is a feature, not a flaw, because it slows down violence and prompts ethical reflection. Military decisions should be difficult. This is why human judgment must remain at the centre.

While defence and national security are reserved for Westminster, Scotland plays a key role in UK defence, from the bases at Faslane and Lossiemouth to the defence research carried out at Scottish universities. The issues raised in the Strategic Defence Review therefore carry particular relevance here.

Scotland's approach to AI, shaped by the AI Strategy (2021) and the Scottish AI Playbook (2024), is notably human-centred. Informed by the Organisation for Economic Co-operation and Development's (OECD) principles, both documents stress the importance of trustworthy, ethical, and inclusive AI that improves people's lives. They highlight the need for transparency, human control, and robust accountability. Though not military in scope, these principles nevertheless offer a useful framework for a Scottish perspective on the development and use of AI for military purposes: keeping people at the centre, and ensuring that technology supports rather than replaces human agency.

The goal should not be the delegation of human decisions to machines, or the replacement of human beings with technology. Rather, AI should support and strengthen human decision-making – a tool for the enactment of human agency: a technological means for strictly human ends.

Dr Joanna LD Wilson is a Lecturer in Law at the University of the West of Scotland.


Hans India
16-07-2025
- Hans India
Sapta Shakti Tech seminar begins in Jaipur
Jaipur: The much-awaited technical seminar titled 'Next Generation Combat – Shaping Tomorrow's Military Today' commenced on Wednesday at Jaipur Military Station. Organised by the South Western Command in collaboration with the Centre for Land Warfare Studies (CLAWS) and the Society of Indian Defence Manufacturers (SIDM), the event was conceptualised by Lieutenant General Manjinder Singh, General Officer Commanding-in-Chief, South Western Command.

Delivering the keynote address, Lt Gen Singh emphasised the transformative role of science and technology in building a Viksit Bharat. He stressed the urgent need for the Indian Army to continuously innovate in response to evolving threats and emerging warfare paradigms. Addressing modern challenges, he spoke about the complexities of grey-zone warfare and the rise of 'hybrid threats' that blur the lines between war and peace. He highlighted the pivotal role of advanced systems, precision munitions, enhanced ISR (Intelligence, Surveillance & Reconnaissance) capabilities, and drone warfare, particularly in the success of Operation Sindoor.

The Army Commander underscored the transformative potential of Artificial Intelligence (AI) in decision-making, operational efficiency, and resource optimisation. He also emphasised the importance of ethical frameworks, human oversight, and compliance with international humanitarian law in the deployment of Lethal Autonomous Weapon Systems (LAWS).

A landmark moment was the signing of an MoU between South Western Command and Malaviya National Institute of Technology (MNIT), Jaipur, to promote joint indigenisation and R&D in defence technology.

Day one of the seminar focused on the implications of an AI-powered battlefield. Discussions explored next-generation solutions like 'hypersonic and directed energy weapons, advanced cyber and electronic warfare systems,' and 'soldier-centric innovations' such as exoskeletons and AI-based battlefield management tools.

A dedicated defence industry exhibition, the 'Sapta Shakti Symposium,' was also inaugurated, showcasing cutting-edge equipment developed to address real-time field challenges. Coordinated by SIDM, it saw enthusiastic participation from leading and emerging defence manufacturers. The insightful deliberations and rich technical exchanges set a promising tone for the second day of the seminar.

Jiji Press
08-06-2025
- Politics
Objectivity Seen as Key to Screening AI Weapons
Tokyo, June 8 (Jiji Press)--Japan's Defense Ministry has compiled guidelines on ensuring appropriate human involvement in the research and development of defense equipment using artificial intelligence.

While the guidelines are expected to cover R&D activities on equipment including unmanned combat-support drones and unmanned ships, how objectivity and reliability should be secured remains a key challenge, as such activities are screened by officials at the ministry. The effectiveness of the guidelines also hinges on the extent to which private-sector companies participating in R&D programs disclose AI data concerning intellectual property.

How to regulate lethal autonomous weapon systems (LAWS), which attack targets identified and selected by AI without human involvement, is being discussed at the United Nations. The Japanese government takes the stance that it has no intention to develop lethal weapons that operate completely autonomously without human involvement, or to conduct R&D on defense equipment whose use is banned under international and domestic laws.


Yomiuri Shimbun
07-06-2025
- Business
- Yomiuri Shimbun
Japan Govt Unveils Guidelines for Managing AI-Incorporated Defense Systems; Aims to Cancel Research of Systems Deemed High Risk
[Photo: Prime Minister Shigeru Ishiba, left, receives an explanation of the next-generation fighter jet at DSEI Japan 2025, an international defense and security equipment exhibition, in Chiba Prefecture in May. Yomiuri Shimbun file photo]

The Defense Ministry has unveiled guidelines for managing the risks associated with defense equipment incorporating artificial intelligence, with the aim of ensuring that the use of AI remains within the scope of human control. The guidelines clearly state that the government will not permit the research and development of defense equipment found to be a Lethal Autonomous Weapons System (LAWS), in which a human is not involved in selecting targets or deciding which targets to attack.

According to the guidelines, risk management for research and development must be conducted in three stages: classification of the AI equipment, a legal review, and a technical review. Equipment will be examined based on how the judgment of the AI system impacts destructive capabilities, dividing research and development targets into high-risk and low-risk categories.

If deemed high-risk, the government will assess compliance with international and domestic laws before research and development commences. This includes, for example, missile launches in which AI assists in identifying targets. If a system is deemed to be LAWS, its research and development will be canceled.

After the legal review is complete, the process moves on to a technical review. This stage verifies that the design allows for human control and ensures safety through mechanisms that reduce AI malfunctions.

To ensure an effective review, the ministry will need the cooperation of defense contractors that design equipment incorporating AI, requiring them to disclose AI algorithms and other relevant information. The ministry plans to finalize the specific methods for securing this cooperation through future discussions with the companies.
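The three-stage process described in the article can be read as a simple decision pipeline. The sketch below is a rough illustration only: the class names, fields, and pass/fail criteria are paraphrases of the article's description, not the ministry's actual guidelines or terminology.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    CANCELED = "canceled"   # deemed LAWS: R&D will not proceed
    APPROVED = "approved"   # passed all applicable reviews
    REJECTED = "rejected"   # failed a legal or technical check

@dataclass
class AIEquipment:
    # All fields are hypothetical stand-ins for the article's criteria.
    name: str
    ai_affects_destructive_capability: bool  # basis for high/low-risk classification
    human_selects_targets: bool              # human involved in targeting decisions
    complies_with_law: bool                  # international and domestic law
    design_allows_human_control: bool        # technical review criterion
    has_malfunction_safeguards: bool         # technical review criterion

def screen(equipment: AIEquipment) -> Decision:
    """Model of the three stages the article describes: classification,
    then a legal review (high-risk only), then a technical review."""
    # Stage 1: classify as high- or low-risk by whether AI judgment
    # impacts destructive capabilities.
    high_risk = equipment.ai_affects_destructive_capability

    # Stage 2: legal review for high-risk equipment, before R&D begins.
    if high_risk:
        if not equipment.human_selects_targets:
            return Decision.CANCELED  # fully autonomous lethal targeting = LAWS
        if not equipment.complies_with_law:
            return Decision.REJECTED

    # Stage 3: technical review of human control and safety mechanisms.
    if equipment.design_allows_human_control and equipment.has_malfunction_safeguards:
        return Decision.APPROVED
    return Decision.REJECTED
```

On this reading, a missile-guidance aid in which AI only proposes targets for a human to approve would clear the LAWS check but would still face the legal and technical reviews before development could begin.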


Japan Today
06-06-2025
- Politics
- Japan Today
Japan sets guidelines for expansion of AI-controlled defense systems
Japan has set guidelines for the safe development of artificial intelligence-controlled defense systems, Defense Minister Gen Nakatani said Friday, aiming to address ethical concerns over weapons that can operate without direct human involvement.

The guidelines outline steps to be followed in the research and development of such defense equipment, calling for careful classification of the systems, legal and policy reviews to guarantee compliance, and technical evaluations of operational reliability.

Nakatani said the guidelines are intended to "reduce risks of using AI while maximizing its benefits," adding they are expected to "provide predictability" for the private sector, with his ministry to "promote research and development activities in a responsible way."

Global concerns over autonomous weapons that use AI are mounting, as the deployment of combat drones has become commonplace in the war between Russia and Ukraine and in conflicts in the Middle East.

The Defense Ministry will conduct reviews to check whether systems meet requirements such as clear human accountability and operational safety, while categorizing such weaponry as "high" or "low" risk. If categorized as high risk based on whether AI influences destructive capabilities, the ministry will assess whether the equipment complies with international and domestic laws, remains under human control, and is not a fully autonomous lethal weapon.

The ministry unveiled its first-ever basic policy for the promotion of AI use last July, focusing on seven fields including detection and identification of military targets, command and control, and logistical support. Last May, the Foreign Ministry submitted a paper on Japan's stance on lethal autonomous weapons systems, or LAWS, to the United Nations, stating that a "human-centric" principle should be maintained and emerging technologies must be developed and used "in a responsible manner."