AI warfare is here: How intelligent drones Harop and Heron fronted India's Operation Sindoor


Economic Times | 17-05-2025

As we enter the age of artificial intelligence (AI), even wars are becoming AI-first. India's Operation Sindoor strikes were fronted by intelligent drones, with high-tech Harops and Herons able to loiter, manoeuvre and choose their targets intelligently. Ukraine has managed to stay in the game against powerful conventional Russian forces through jerry-rigged autonomous and AI-guided drones, with small, first-person-view (FPV) attack drones, guided by algorithms, destroying more Russian armour than any other weapon category. Meanwhile, in Gaza, the Israelis have used advanced algorithms, code-named The Gospel and Lavender, to sift intelligence and suggest targets in real time. In 2020, a Turkish-made Kargu-2 attack drone may have autonomously hunted down fighters in Libya without human orders, possibly the first lethal strike by a truly autonomous weapon.
In our imagination, AI warfare is about armies of futuristic Terminator robots marching in tandem as they go to war; in reality, the age of AI warfare has already begun. As with everything involving war and AI, warfare using lethal autonomous weapons (LAWs) poses disconcerting questions. LAWs are machines that can identify, select and kill targets without human intervention. Unlike nuclear weapons, these systems are relatively cheap, scalable and hard to control once unleashed. The level of human control can vary: from 'human-in-the-loop' systems requiring authorisation for each engagement, to 'human-on-the-loop' systems where a human can override autonomous actions, to 'human-out-of-the-loop' systems operating without any human involvement post-activation. This possibility of a new kind of war, where a machine makes life-and-death decisions, has spurred further calls at the UN to ban such weapons.
There are fears among ethicists and human rights bodies of accidental escalation, loss of accountability, or full-scale 'drone wars' with no human restraint. Clearly, nations are not on the same page: development continues furiously among major powers that see military gains in letting AI take the reins. AI warfare has moved beyond tactical advantage into established policy, with Chinese military doctrine explicitly naming 'intelligentised warfare' as its future. While the notion of LAWs and AI warfare is horrific, this article deliberately steps beyond the familiar 'ban or regulate' discourse to explore a few contrarian, counterintuitive views that argue AI could perhaps make war more humane.

One counterintuitive argument is that outsourcing war to machines could save human lives. If robots shoulder the most dangerous tasks, human soldiers stay out of harm's way. Is it not better to send a disposable machine into a kill-zone to fight another machine than a young soldier to kill another? Recent conflicts hint at this life-saving potential: Azerbaijan's victory over Armenia in the 2020 Nagorno-Karabakh war, for example, was achieved largely through superior drones, greatly reducing its own casualties. This could usher in an era of 'boutique wars', persistent, low-intensity conflicts waged primarily by AI systems, flying below the threshold that typically triggers major international intervention. Tempting as this sounds, it has a downside: making war 'risk-free' for the side with more killer machines could make leaders more willing to launch military adventures.

A second contrarian idea is that AI might make warfare more ethical by improving precision. Most militaries already try to minimise collateral damage, as India has sought to do in Operation Sindoor. AI tools could make 'surgical' strikes even sharper. Human soldiers, despite their valour, are prone to error, fatigue and emotion.
AI systems, in theory, can be trained to avoid civilian zones, assess threats more accurately and halt operations when rules of engagement are violated. An autonomous AI system could be programmed never to fire at a school or a hospital, and it would emotionlessly obey this every single time. Imagine an AI drone that aborts a strike mid-flight because an ambulance enters the frame, something a human pilot might miss in the fog of war. Even the Red Cross has acknowledged that AI-enabled decision-support systems 'may enable better decisions by humans… minimising risks for civilians'.

The notion of a 'clean war' enabled by AI precision is a double-edged sword, however. The same Israeli AI system that identified militants in Gaza also churned out algorithmic kill-lists with minimal human review. If flawed data or biased algorithms mislabel a civilian as a threat, an AI could kill innocents with ruthless efficiency. AI can enhance compliance with the laws of war, but it cannot substitute for human judgment.

Operation Sindoor has also highlighted the danger of misinformation and deepfakes peddled by mainstream media. AI could change this. Autonomous systems log everything: location data, video footage, target decisions. That opens up the possibility of 'algorithmic accountability', with every strike audited and every action justified, or condemned.

Perhaps the most novel contrarian view appears in a recent paper, 'Superintelligence Strategy: Expert Version', by Eric Schmidt and others, which borrows from the Cold War nuclear deterrent of MAD, or Mutually Assured Destruction, to propose MAIM: Mutual Assured AI Malfunction. The idea is that as AI becomes core to military systems, nations may hesitate to strike each other, because attacking one AI system could cause unpredictable ripple effects across both sides.
The inherent vulnerability of complex AI systems to sabotage, through cyberattacks, degradation of training data, or even kinetic strikes on critical infrastructure such as data centres, creates a de facto state of mutual restraint among AI superpowers. MAIM flips the dystopian script: instead of AI dooming us, the mutual fear of runaway AI could keep rival powers' aggressive instincts in check.

It may seem surreal to discuss how AI could make war more humane, if there is such a thing, rather than more horrific than ever. The contrarian perspectives above challenge our instincts, and many would recoil at the idea of killer robots marching in. But with so much of this already becoming reality, we can no longer avoid these questions.
We can choose grim pessimism, or take a glass-half-full view that technology guided by human values might make future wars less inhuman. Everything, they say, is fair in love and war, and that everything might soon include artificial intelligence.


Related Articles

Pakistan Urges U.S. To Mediate With India As Simla Agreement Declared ‘Dead'
Time of India | an hour ago

Jun 06, 2025, 12:46PM IST: Pakistan's Prime Minister Shehbaz Sharif has urged U.S. mediation with India, calling for comprehensive dialogue on long-standing issues. He praised Donald Trump for defusing tensions after the Pahalgam terror attack. Pakistan's Defence Minister Khawaja Asif added fuel to the fire, declaring the Simla Agreement void and reverting to a UN position on Kashmir.

Elon Musk says SpaceX won't decommission Dragon capsule despite Trump threat: What happens if he changes his mind?
Mint | an hour ago

Elon Musk has signalled a scaling back of hostilities with US President Donald Trump, stating that SpaceX will not be decommissioning the Dragon spacecraft. Musk had previously threatened to decommission the capsule in response to Trump's threat to cancel all US government contracts with Musk's companies. 'In light of the President's statement about cancellation of my government contracts, @SpaceX will begin decommissioning its Dragon spacecraft immediately,' Musk had written, quoting a post by Trump.

However, the billionaire seemed in a conciliatory mood after some advice from one of his followers, who wrote, 'This is a shame this back and forth. You are both better than this. Cool off and take a step back for a couple days.' Musk responded 'Good advice' and said the Dragon spacecraft would not be decommissioned for now.

Notably, SpaceX is at risk of losing $22 billion worth of US government contracts if Trump follows through on his threats. Meanwhile, the US would have no option but to rely on Russia to get its crews to the space station if Musk were to decommission the Dragon spacecraft.

The Dragon spacecraft is a capsule developed by SpaceX with the help of government agencies, and it plays an important role in operating the space station, according to the Associated Press. NASA is reportedly heavily reliant on SpaceX for other programmes as well, such as launching scientific missions and returning astronauts from the Moon's surface. Currently, SpaceX is the only US company capable of transporting crews to and from the space station, using its four-person Dragon capsules. Although Boeing's Starliner capsule has flown astronauts once before, last year's test flight went so badly that two NASA astronauts had to take a SpaceX ride home in March. NASA's other option is Russia's Soyuz capsules, currently the only alternative for transporting crews to the space station.
The Soyuz capsules reportedly carry three people per launch: two Russian astronauts and one NASA astronaut. Each SpaceX launch, meanwhile, carries one Russian astronaut as part of a barter system.

NDTV Exclusive: How Rs 5 Indian Biscuit Is Being Sold For Rs 2,400 In Gaza
NDTV | an hour ago

New Delhi: Parle-G biscuits, a staple in Indian households associated with childhood, tea breaks and low-cost nutrition, were never intended to be a luxury. But in war-torn Gaza, where food scarcity has turned into acute famine, they are being sold at nearly 500 times their original price. In a recent viral post from Gaza, a man claimed that Parle-G biscuits, manufactured by Mumbai-headquartered Parle Products, are being sold for over 24 euros (Rs 2,342). Many on social media were baffled by the cost of biscuits that have consistently been among the cheapest foods in the Indian market. 'After a long wait, I finally got Ravif her favorite biscuits today. Even though the price jumped from €1.5 to over €24, I just couldn't deny Rafif her favorite treat,' read the viral post by Mohammed jawad 🇵🇸 (@Mo7ammed_jawad6), dated June 1, 2025.

A Manufactured Famine

Following the October 2023 escalation and Israel's military campaign that began shortly thereafter, Gaza's access to food has been systematically reduced. Between March 2 and May 19 this year, the besieged Palestinian enclave faced a near-total blockade. Only a limited number of humanitarian trucks were allowed through, most of them after intense international pressure. Israel, which accuses Hamas, the political and militant group within Gaza, of seizing and weaponising aid, had suspended traditional UN food deliveries. Instead, a controversial and heavily criticised alternative was introduced on May 27: the Secure Distribution Site 1 (SDS1) model, developed by the Gaza Humanitarian Foundation (GHF), which, according to the French daily Le Monde, is a joint initiative backed by the US, Switzerland and Israel.
The SDS1, located in Rafah, features caged corridors forcing Palestinians into narrow queues, with guarded perimeters manned by Safe Reach Solutions, a US-based private security firm that has been accused of carrying out intelligence operations in Gaza using Israeli data. But how much aid reaches the truly hungry? How many aid boxes are being sold on the black market at inflated prices?

The Black Market Reality

The steep pricing is not limited to Parle-G, an export from a country roughly 4,300 km away. 'The problem isn't with the original suppliers or taxation,' Dr Khaled Alshawwa, a 31-year-old surgeon based in Gaza City, told NDTV. 'These goods usually enter Gaza as humanitarian aid, free of charge. But only a minority receives them. Scarcity turns them into high-priced black market goods.' Dr Alshawwa managed to get his hands on a packet of Parle-G biscuits, which he said cost him roughly Rs 240; prices differ by location and by who the seller is. 'The closure of borders for more than three months now has allowed only a scarce amount of very basic needs that don't meet the needs of 2 million people. So when some people are able to get some, or when looting happens, these foods are being sold at very high, unaffordable prices,' Dr Alshawwa told NDTV. Parle-G, it appears, likely arrived through aid shipments, eventually landing in the hands of a few vendors who sold it at prices unreachable for most Gazans. NDTV has reached out to the company for a statement.

A rough breakdown of current market prices (in INR) of some important products from northern Gaza as of June 6, 2025:

1 kg sugar: Rs 4,914
1 litre cooking oil: Rs 4,177
1 kg potatoes: Rs 1,965
1 kg onions: Rs 4,423
1 coffee cup: Rs 1,800

A list sourced by NDTV from Gaza shows basic commodities and groceries being sold at exorbitant prices. The prices are quoted in the new Israeli shekel, the local currency.
One Israeli shekel translates to 24.57 Indian rupees.

Why Parle-G Matters

Parle-G is more than food. It is nostalgia wrapped in paper. Launched in 1938, the biscuit emerged during India's Swadeshi movement as a local alternative to elite British snacks. It became a national equaliser, a biscuit anyone could afford. Over the decades, Parle-G has retained its low price tag thanks to 'shrinkflation' economics: reducing weight while maintaining price. A Rs 5 packet that once held 100 grams now contains about 55 grams. Still, it remains among the cheapest packaged food products in India. By 2011, it was the world's largest-selling biscuit by volume, according to Nielsen, and in 2013 it became the first Indian FMCG brand to cross Rs 5,000 crore in sales.
