Why we must keep humans at the heart of AI in warfare

Since 2016, discussions within the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), convened under the Convention on Certain Conventional Weapons, have been ongoing, but International Humanitarian Law (IHL) still lacks any specific, binding regulations on AI. As International Committee of the Red Cross (ICRC) President Mirjana Spoljaric has noted, AI in war is 'no longer an issue for tomorrow' but 'an urgent humanitarian priority today', requiring the immediate 'negotiation of new legally binding international rules'. Accordingly, United Nations Secretary-General António Guterres recommended, in his 2023 New Agenda for Peace, that 'a legally binding instrument' to prohibit and/or regulate AI weapons be concluded by 2026.
The ICRC has stressed that responsibility in warfare must remain with humans. 'Human control must be maintained,' it argues, and limits on autonomy urgently established 'to ensure compliance with international law and to satisfy ethical concerns'.
In 2022, the UK Ministry of Defence (MoD) itself echoed this sentiment. It stated that only human soldiers 'can make instinctive decisions on the ground in a conflict zone; improvise on rescue missions during natural disasters; or offer empathy and sympathy.' The then Defence Secretary Ben Wallace added that 'at its heart, our Army relies on the judgment of its own individuals.'
A recruitment campaign at the time carried the tagline: 'Technology will help us do incredible things. But nothing can do what a soldier can do.'
Colonel Nick Mackenzie, then Assistant Director for Recruitment, highlighted that, while 'technology is really, really important… there is always somebody, a person, behind that technology,' who is ultimately responsible for its use and the decisions it enables.
Since then, however, the use of AI-enabled rapid target identification systems in contemporary conflicts has grown rapidly, with notable examples being Lavender and Where's Daddy (Israel/Palestine), and Saker and Wolly (Russia/Ukraine). A human being is generally still required to authorise any lethal engagement, but technological capabilities are already being developed to remove human input from the targeting process altogether.
Against this backdrop, the MoD's Strategic Defence Review 2025, released last month, calls for 'greater use of autonomy and Artificial Intelligence within the UK's conventional forces'. 'As in Ukraine,' the Review continues, 'this would provide greater accuracy, lethality, and cheaper capabilities – changing the economics of defence.'
One example is Project ASGARD, which will help the Army locate and strike enemy targets at greater distances using AI as a 'force multiplier'. This is just one of over 400 AI-related projects being run by the MoD.
What remains unclear, but is critical from a legal and moral perspective, is what role human judgment will play in these projects and the military operations they support.
Computer scientist Pei Wang has said that while AI can behave like human intelligence in some ways, it is fundamentally different. AI shouldn't replace human intelligence, but rather support and enhance it – helping people make better-informed decisions.
Human-robot interaction specialist Karolina Zawieska warns of the need to distinguish between what is human and what is only human-like. AI systems often function as a 'black box', meaning it is not always clear how or why they produce certain outcomes. This creates serious problems for human understanding, control, and accountability.
When properly used, AI can support situational awareness and help human operators make better decisions. In this sense, it is a tool – not a decision-maker. But if too much control is handed over to AI, we risk removing human judgment and with it, moral responsibility.
Professor Jeff McMahan, moral philosopher at the Oxford Institute for Ethics, Law and Armed Conflict, has argued that it is essential for combatants to feel 'deep inhibitions about tackling non-combatants'.
However accurate or efficient AI may be, these inhibitions cannot be replicated by algorithms. As political scientist Valerie Morkevičius has pointed out, the emotional and moral 'messiness' of war is a feature, not a flaw: it slows down violence and prompts ethical reflection. Military decisions should be difficult. This is why human judgment must remain at the centre.
While defence and national security are reserved for Westminster, Scotland plays a key role in UK defence, from the bases at Faslane and Lossiemouth to the defence research carried out at Scottish universities. The issues raised in the Strategic Defence Review therefore carry particular relevance here.
Scotland's approach to AI, shaped by the AI Strategy (2021) and the Scottish AI Playbook (2024), is notably human-centred. Informed by the Organisation for Economic Co-operation and Development's (OECD) principles, both documents stress the importance of trustworthy, ethical, and inclusive AI that improves people's lives. They highlight the need for transparency, human control, and robust accountability.
Though not military in scope, these principles nevertheless offer a useful framework for a Scottish perspective on the development and use of AI for military purposes: keeping people at the centre, and ensuring that technology supports rather than replaces human agency.
The goal should not be the delegation of human decisions to machines, or the replacement of human beings with technology. Rather, AI should support and strengthen human decision-making – a tool for the enactment of human agency: a technological means for strictly human ends.
Dr Joanna LD Wilson is a Lecturer in Law at the University of the West of Scotland