
Latest news with #ConventiononCertainConventionalWeapons

Why we must keep humans at the heart of AI in warfare

The Herald Scotland

28-07-2025

Since 2016, discussions of the Convention on Certain Conventional Weapons Group of Governmental Experts on LAWS have been ongoing, but International Humanitarian Law (IHL) still lacks any specific, binding regulations relating to AI. As noted by International Committee of the Red Cross (ICRC) President Mirjana Spoljaric, AI in war is 'no longer an issue for tomorrow', but rather 'an urgent humanitarian priority today', requiring the immediate 'negotiation of new legally binding international rules'. Accordingly, United Nations Secretary General António Guterres recommended, in his 2023 New Agenda for Peace, that 'a legally binding instrument' to prohibit and/or regulate AI weapons be concluded by 2026.

The ICRC has stressed that responsibility in warfare must remain with humans. 'Human control must be maintained,' it argues, and limits on autonomy urgently established 'to ensure compliance with international law and to satisfy ethical concerns'. In 2022, the MoD itself echoed this sentiment, stating that only human soldiers 'can make instinctive decisions on the ground in a conflict zone; improvise on rescue missions during natural disasters; or offer empathy and sympathy.' The then Defence Secretary Ben Wallace added that 'at its heart, our Army relies on the judgment of its own individuals.' A recruitment campaign at the time carried the tagline: 'Technology will help us do incredible things. But nothing can do what a soldier can do.' Colonel Nick Mackenzie, then Assistant Director for Recruitment, highlighted that, while 'technology is really, really important… there is always somebody, a person, behind that technology,' who is ultimately responsible for its use and the decisions it enables.

Since then, however, the use of AI-enabled rapid target identification systems in contemporary conflicts has grown rapidly, with notable examples being Lavender and Where's Daddy (Israel/Palestine), and Saker and Wolly (Russia/Ukraine).
A human being is generally still required in order to engage any lethal effects, but technological capabilities are already being developed to remove human input from the targeting process altogether. Against this backdrop, the MoD's Strategic Defence Review 2025, released last month, calls for 'greater use of autonomy and Artificial Intelligence within the UK's conventional forces'. 'As in Ukraine,' the Review continues, 'this would provide greater accuracy, lethality, and cheaper capabilities – changing the economics of defence.' One example is Project ASGARD, which will help the Army locate and strike enemy targets at greater distances using AI as a 'force multiplier'. This is just one of over 400 AI-related projects being run by the MoD. What remains unclear, but is critical from a legal and moral perspective, is what role human judgment will play in these projects and the military operations they support.

Computer scientist Pei Wang has said that while AI can behave like human intelligence in some ways, it is fundamentally different: AI should not replace human intelligence, but rather support and enhance it, helping people make better-informed decisions. Human-robot interaction specialist Karolina Zawieska warns of the need to distinguish between what is human and what is only human-like. AI systems often function as a 'black box', meaning it is not always clear how or why they produce certain outcomes. This creates serious problems for human understanding, control, and accountability.

When properly used, AI can support situational awareness and help human operators make better decisions. In this sense, it is a tool, not a decision-maker. But if too much control is handed over to AI, we risk removing human judgment and, with it, moral responsibility.
Professor Jeff McMahan, moral philosopher at the Oxford Institute for Ethics, Law and Armed Conflict, has argued that it is essential for combatants to feel 'deep inhibitions about tackling non-combatants'. However accurate or efficient AI may be, these inhibitions cannot be replicated by algorithms. As political scientist Valerie Morkevičius has pointed out, the emotional and moral 'messiness' of war is a feature, not a flaw, because it slows down violence and prompts ethical reflection. Military decisions should be difficult. This is why human judgment must remain at the centre.

While defence and national security are reserved for Westminster, Scotland plays a key role in UK defence, from the bases at Faslane and Lossiemouth to the defence research carried out at Scottish universities. The issues raised in the Strategic Defence Review therefore carry particular relevance here.

Scotland's approach to AI, shaped by the AI Strategy (2021) and the Scottish AI Playbook (2024), is notably human-centred. Informed by the Organisation for Economic Co-operation and Development's (OECD) principles, both documents stress the importance of trustworthy, ethical, and inclusive AI that improves people's lives. They highlight the need for transparency, human control, and robust accountability. Though not military in scope, these principles nevertheless offer a useful framework for a Scottish perspective on the development and use of AI for military purposes: keeping people at the centre, and ensuring that technology supports rather than replaces human agency. The goal should not be the delegation of human decisions to machines, or the replacement of human beings with technology.
Rather, AI should support and strengthen human decision-making – a tool for the enactment of human agency: a technological means for strictly human ends.

Dr Joanna LD Wilson is a Lecturer in Law at the University of the West of Scotland.

'Politically Unacceptable, Morally Repugnant': UN Chief Calls For Global Ban On 'Killer Robots'

Scoop

14-05-2025

14 May 2025

'There is no place for lethal autonomous weapon systems in our world,' Mr. Guterres said on Monday, during an informal UN meeting in New York focused on the use and impact of such weapons. 'Machines that have the power and discretion to take human lives without human control should be prohibited by international law.'

The two-day meeting in New York brought together Member States, academic experts and civil society representatives to examine the humanitarian and human rights risks posed by these systems. The goal: to lay the groundwork for a legally binding agreement to regulate and ban their use.

Human control is vital

While there is no internationally accepted definition of autonomous weapon systems, they broadly refer to weapons, such as advanced drones, which select targets and apply force without human instruction. The Secretary-General said in his message to the meeting that any regulations and prohibitions must make people accountable. 'Human control over the use of force is essential,' Mr. Guterres said. 'We cannot delegate life-or-death decisions to machines.'

There are substantial concerns that autonomous weapon systems violate international humanitarian and human rights laws by removing human judgement from warfare. The UN chief has called for Member States to set clear regulations and prohibitions on such systems by 2026.

Approaching a legally binding agreement

UN Member States have considered regulations for autonomous weapons systems since 2014 under the Convention on Certain Conventional Weapons (CCW), which deals with weapons that may violate humanitarian law. Most recently, the Pact for the Future, adopted in September last year, included a call to avoid the weaponization and misuse of constantly evolving weapons technologies. Stop Killer Robots – a coalition of approximately 270 civil society organizations – was one of the organizations speaking out during this week's meeting.
Executive Director Nicole van Rooijen told UN News that consensus was beginning to emerge around a few key issues, something she described as a 'huge improvement.' Specifically, there is consensus on what is known as a 'two-tiered' approach, meaning that there should be both prohibitions on certain types of autonomous weapon systems and regulations on others.

However, there are still other sticking points. For example, it remains unclear what precisely characterizes an autonomous weapon system and what it would look like to legislate 'meaningful human control.' Talks so far have been consultations only and 'we are not yet negotiating,' Ms. van Rooijen told UN News: 'That is a problem.'

'Time is running out'

The Secretary-General has repeatedly called for a ban on autonomous weapon systems, saying that the fate of humanity cannot be left to a 'black box.' Recently, however, there has been increased urgency around this issue, in part due to the quickly evolving nature of artificial intelligence, algorithms and, therefore, autonomous systems overall. 'The cost of our inaction will be greater the longer we wait,' Ms. van Rooijen told us. She also noted that systems are becoming less expensive to develop, raising concerns about proliferation among both State and non-state actors.

The Secretary-General, in his comments on Monday, also underlined the 'need for urgency' in establishing regulations around autonomous weapon systems. 'Time is running out to take preventative action,' Mr. Guterres said.

UN revisits 'killer robot' regulations as concerns about AI-controlled weapons grow

Yahoo

14-05-2025

Several nations met at the United Nations (U.N.) on Monday to revisit a topic that the international body has been discussing for over a decade: the lack of regulations on lethal autonomous weapons systems (LAWS), often referred to as "killer robots." This latest round of talks comes as wars rage in Ukraine and Gaza.

While the meeting was held behind closed doors, U.N. Secretary-General António Guterres released a statement doubling down on his 2026 deadline for a legally binding solution to the threats posed by LAWS. "Machines that have the power and discretion to take human lives without human control are politically unacceptable, morally repugnant and should be banned by international law," Guterres said in a statement. "We cannot delegate life-or-death decisions to machines," he later added.

International Committee of the Red Cross (ICRC) President Mirjana Spoljaric delivered a statement to nations participating in Monday's meeting. Spoljaric expressed the ICRC's support for efforts to regulate LAWS but warned that technology is evolving faster than regulations, making the threats posed by these systems "more worrying." "Machines with the power and discretion to take lives without human involvement threaten to transform warfare in ways with grave humanitarian consequences. They also raise fundamental ethical and human rights concerns. All humanity will be affected," Spoljaric said.

Artificial intelligence is not necessarily a prerequisite for something to be considered an autonomous weapon, according to the U.N., as not all autonomous systems fully rely on AI. Some can use pre-programmed functions for certain tasks. However, AI "could further enable" autonomous weapons systems, the U.N. said.
Vice President of the Conservative Partnership Institute Rachel Bovard, however, says that while regulation of autonomous weapons is necessary, the U.S. needs to be cautious when it comes to the development of international law. "AI is the wild west and every country is trying to determine the rules of the road. Some regulation will be imperative to preserving our humanity. When it comes to international law, however, the U.S. should proceed with caution," Bovard told Fox News Digital. "As we have learned with everything from trade to health, subjecting our national sovereignty to international dictates can have lasting unintended consequences. If existing international law is sufficient at the moment, that is what should govern."

Countries in the Convention on Certain Conventional Weapons have been meeting since 2014 to discuss a possible full ban on LAWS that operate without human control and to regulate those with more human involvement, according to Reuters. In 2023, more than 160 nations backed a U.N. resolution calling on countries across the globe to address the risks posed by LAWS. However, there is currently no international law specifically regulating LAWS.
