
Latest news with #SparkCognition

Aditya Infotech IPO: Dixon was allotted shares at ₹340 apiece. Should you pay double?

Mint

28-07-2025


Aditya Infotech, India's largest distributor of video surveillance products, is set to raise ₹1,300 crore through its upcoming initial public offering (IPO). The IPO comprises a fresh issue of ₹500 crore and an offer-for-sale of ₹800 crore by the promoters. Following the listing, the promoters' stake will decline to 77% from 93%. Dixon Technologies, an existing investor, will continue to hold a 7% stake, while public shareholders will own the remaining 24%.

Aditya Infotech is best known for its in-house CP Plus brand, a well-established Indian name in the security and surveillance industry. It also serves as the exclusive distributor for Dahua Technology, a global leader in video surveillance systems. The IPO opens on 29 July, with a price band of ₹640 to ₹675 per share, valuing the company at ₹7,912 crore at the upper end of the band.

At the heart of the issue lies a move to reduce leverage. Of the fresh issue, ₹375 crore is earmarked for debt repayment, with the rest to be used for general corporate purposes. While the debt reduction improves the optics on financial discipline, the real story may lie elsewhere. In India's still-underrated electronic security and surveillance market, what sets Aditya Infotech apart, and what should investors know as the company makes its public debut?

How dominant is CP Plus in Aditya Infotech's portfolio?

Aditya Infotech operates in a highly organized surveillance market, where 90% of the industry share is held by formal players. The company leads the pack with a 21% revenue market share in FY25. It manufactures and sells its CP Plus brand, exclusively distributes products from China's Dahua Technology, and generates revenue primarily from one-time sales of each product unit. Together, CP Plus and Dahua accounted for 94% of the company's ₹3,112 crore revenue in FY25, with CP Plus contributing 69% and Dahua 25%. The higher share of CP Plus helps reduce reliance on third-party brands. Aditya Infotech has also begun offering services and solutions to build recurring revenue streams. For instance, its CP Plus AI solution, powered by SparkCognition, offers video analytics capabilities on a subscription basis.

How diversified is Aditya Infotech's product portfolio?

Its product range spans commercial (thermal solutions, interactive displays), consumer (dash cams, Wi-Fi doorbells) and surveillance products, including digital video recorders (DVRs) and Power over Ethernet (PoE) devices. However, its designs and products are not patented, making it vulnerable to imitation and a potential loss of competitive edge. Its products are used across the banking, healthcare, education, law enforcement, hospitality, smart traffic, industrial and retail sectors. Surveillance products such as CCTV cameras made up 79% of revenue, while the remaining 21% came from accessories like routers, cables, and monitors.

Is Aditya Infotech's rising inventory a concern?

With an annual manufacturing capacity of 17.2 million units, Aditya is the world's third-largest manufacturer of surveillance products. Capacity utilization stood strong at 77%, reflecting steady demand, and has inched up from 73% in FY23. At the same time, inventory levels in FY25 rose 71% to ₹871 crore compared to FY24, indicating either weaker-than-expected demand or a buildup in anticipation of stronger demand ahead. If demand is weak, high inventory may strain working capital and hurt margins. If the buildup is strategic, it could aid faster deliveries, but it risks overestimating demand.
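For readers who want to sanity-check those inventory figures, here is a quick back-of-the-envelope sketch in Python using only the numbers quoted above. The inventory-days estimate is computed against revenue rather than cost of goods sold (which the article does not provide), so treat it as a rough proxy, not a precise working-capital metric.

```python
# Rough check on the inventory figures quoted above (all values in Rs crore).
fy25_inventory = 871      # FY25 inventory, stated as a 71% rise over FY24
yoy_growth = 0.71
fy25_revenue = 3112       # FY25 revenue

implied_fy24_inventory = fy25_inventory / (1 + yoy_growth)
inventory_to_revenue = fy25_inventory / fy25_revenue
approx_inventory_days = inventory_to_revenue * 365   # proxy only: uses revenue, not COGS

print(f"Implied FY24 inventory: ~{implied_fy24_inventory:.0f} crore")            # ~509
print(f"Inventory as share of FY25 revenue: {inventory_to_revenue:.1%}")          # ~28%
print(f"Approximate inventory days (vs revenue): ~{approx_inventory_days:.0f}")   # ~102
```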
How exposed is Aditya Infotech to supplier concentration risks?

Supplier concentration risk remains high: AIL Dixon (a joint venture between Dixon Technologies and Aditya Infotech) supplies 52% of materials consumed, 85% of which are sourced from outside India. The top five suppliers together account for 92% of raw materials, exposing the company to both supplier concentration and geographic risk. By contrast, the company's localization strategy offers some comfort: about 85% of the CP Plus product line is manufactured in India, with 96% of materials sourced domestically and only 4% imported, a far smaller import share than AIL Dixon's.

How balanced is Aditya Infotech's regional exposure?

The company's distribution network is one of its biggest strengths. It sells through over 1,000 distributors and runs 69 exclusive CP Plus Galaxy stores across 550 cities and towns. A dominant 80% of sales come via distributors, while online marketplaces contribute only 2.5%. The business is primarily concentrated in India, with 99.68% of revenue derived from domestic sales. Domestic revenue is fairly diversified: North India leads with 39%, followed by the West (26%), South (20%) and East (15%), which reduces reliance on any single region.

Is low client concentration a strength or a risk?

Client concentration remains low. The top client contributed just 4% of FY25 revenue, the top five contributed 12%, and the top 10 made up 19%, down from 21% in FY23. Most customer relationships are long term, with the top 10 customers having relationships of over six years; they include Bright Computers, IR Focus CCTV, Kiran Electro Systems and Wasp Infotech. However, the company does not enter into long-term contracts or firm purchase agreements with its customers, which means order volumes are subject to changes in customer demand, pricing terms and competition. As a result, Aditya must continuously invest in customer acquisition and channel relationships to maintain sales momentum. To this end, it is focusing on a service-based model and enterprise customers to secure and grow relationships with large corporates.

Could rising warranty claims dent Aditya Infotech's margins?

While the company hasn't specified the exact duration of its warranty, it remains liable for claims arising from faulty or defective CP Plus products. In contrast, warranties and post-sales services for Dahua products are handled directly by Dahua. Warranty claims have increased in line with rising sales volumes: product service and warranty expenses grew from ₹9.5 crore (0.42% of revenue) in FY23 to ₹14.8 crore (0.48%) in FY25. While the numbers are not large, a continued rise in claims could still impact margins and profitability. Aditya Infotech also does not have an insurance policy covering warranty expenses, so any payout exceeding its reserves could affect its financial position.

What's holding back Aditya Infotech's net profitability?

Aditya Infotech has posted strong revenue growth, with the topline rising 36% from ₹2,296 crore in FY23 to ₹3,112 crore in FY25, led by higher volumes across product categories. Profitability also improved at the operating level: Ebitda rose 43% to ₹258 crore, up from ₹181 crore in FY23, as margins expanded by 38 basis points to 8.27%. Even though Aditya Infotech has no listed peer, its margins are modest compared with the 20.7% margin of unlisted competitor Prama Hikvision.
At the same time, the smaller Samriddhi Automations operates at a lower 5.4% margin, which places Aditya Infotech somewhere in the middle of the segment.

The picture looks weaker at the profit after tax (PAT) level, however. Despite topline growth and better operating performance, adjusted PAT (before exceptional items) fell 10% to ₹103 crore in FY25 from ₹114 crore in FY23, as PAT margins contracted by 168 basis points to 3.3%. The decline was driven by a sharp increase in expenses: employee benefit expenses doubled to ₹203 crore, other expenses surged 80%, and depreciation rose nearly four-fold. (We have used PAT before exceptional items throughout to exclude the impact of the ₹249 crore gain arising from fair valuation in FY25.)

The return ratios remain moderate too, with return on equity at 10% and return on capital employed at 16%. The company also carries some debt on its books. As of 31 May 2025, outstanding debt stood at ₹423 crore, with ₹42 crore paid in finance costs during FY25. However, ₹375 crore from the IPO proceeds will go towards debt repayment, providing a near-term boost to the bottom line in FY26.

Is the surveillance boom already baked into Aditya's valuation?

Aditya Infotech's numbers are reasonable, but not strong enough to justify the premium the company is seeking. The IPO implies a price-to-earnings multiple of 77 times (roughly ₹7,912 crore of market value against adjusted FY25 PAT of ₹103 crore), which looks rich given its average return ratios and muted profitability. Notably, the IPO price is almost double the ₹340 per share at which shares were allotted to Dixon in September.

Even so, Aditya Infotech is well placed to benefit from the rapid growth in India's video surveillance sector, which is expected to expand at 16.5% annually, from ₹106 billion in FY25 to ₹227 billion in FY30, with the number of units sold nearly doubling from 40 million to 75 million over the same period. But the valuation leaves little margin of safety, even if the growth story holds.

For more such analysis, read Profit Pulse.

About the author: Madhvendra has over seven years of experience in equity markets and has cleared the NISM-Series-XV: Research Analyst Certification Examination. He specialises in writing detailed research articles on listed Indian companies, sectoral trends, and macroeconomic developments.

Disclosure: The writer does not hold the stocks discussed in this article. The purpose of this article is only to share interesting charts, data points, and thought-provoking opinions. It is NOT a recommendation. If you wish to consider an investment, you are strongly advised to consult your advisor. This article is strictly for educational purposes only.

AI is the future of war

AllAfrica

17-05-2025


In the 1983 film War Games, a supercomputer known as WOPR (for War Operation Plan Response) is about to provoke a nuclear war between the United States and the Soviet Union, but because of the ingenuity of a teenager (played by Matthew Broderick), catastrophe is averted. In the first Terminator film, released a year later, a supercomputer called 'Skynet', built to safeguard America's nuclear weapons, decides to exterminate humanity because it perceives humans as a threat to its existence. Although these films offered audiences grim scenarios of intelligent machines running amok, they were also prophetic.

Artificial intelligence (AI) is now so commonplace that it's routinely applied during a simple Google search. That it is also being integrated into military strategies is hardly a surprise. It's just that we have little understanding of the capacity of these high-tech weapons, both those that are now ready for use and those in development. Nor are we prepared for systems that have the capacity to transform warfare forever. Throughout history, it has been the human intelligence wielding a technology, not the technology itself, that has won or lost wars. That may change in the future, when human intelligence is focused instead on creating systems that are more capable on the battlefield than those of the adversary.

Artificial intelligence isn't a technology that can be easily detected, monitored, or banned, as Amir Husain, the founder and CEO of the AI company SparkCognition, pointed out in an essay for Media News. Integrating AI elements (visual recognition, language analysis, simulation-based prediction, and advanced forms of search) with existing technologies and platforms 'can rapidly yield entirely new and unforeseen capabilities.' The result 'can create exponential, insurmountable surprise,' Husain writes.

Advanced technology in warfare is already widespread. The use of uncrewed aerial vehicles (UAVs), commonly known as drones, in military settings has set off warnings about 'killer robots.' What happens when drones are no longer controlled by humans and can execute military missions on their own? These drones aren't limited to the air; they can operate on the ground or underwater as well. The introduction of AI, effectively giving these weapons the capacity for autonomy, isn't far off. Moreover, they're cheap to produce and cheap to purchase. The Russians are buying drones from Iran for use in their war in Ukraine, and the Ukrainians have been putting together a cottage industry constructing drones of their own against the Russians. The relative ease with which a commercial drone can be converted into one with a military application also blurs the line between commercial and military enterprises. At this point, though, humans are still in charge.

A similar problem can be seen in information-gathering systems that have dual uses, including satellites, manned and unmanned aircraft, ground and undersea radars, and sensors, all of which have both commercial and military applications. AI can process vast amounts of data from all these systems and then discern meaningful patterns, identifying changes that humans might never notice. American forces were stymied to some degree in the wars in Iraq and Afghanistan because they could not process large amounts of data. Even now, remotely piloted UAVs are using AI for autonomous takeoff, landing, and routine flight. All that's left for human operators to do is concentrate on tactical decisions, such as selecting attack targets and executing attacks.
AI also allows these systems to operate rapidly, determining actions at speeds that are seldom possible if humans are part of the decision-making process. Until now, decision-making speed has been the most important aspect of warfare. If, however, AI systems go head-to-head against humans, AI will invariably come out ahead. However, the possibility that AI systems could eliminate the human factor terrifies people who don't want to see an apocalyptic scenario on celluloid come to pass in reality.

A distinction needs to be made between the terms 'autonomous' and 'automated.' If we are controlling the drone, then the drone is automated. But if the drone is programmed to act on its own initiative, we would say it is autonomous. But does 'autonomous weapon' describe the actual weapon (i.e., a missile on a drone) or the drone itself? Take, for example, the Global Hawk military UAV. It is automated insofar as it is controlled by an operator on the ground, and yet if it loses communication with the ground, the Global Hawk can land on its own. Does that make it automated or autonomous? Or is it both?

The most important question is whether the system is safety-critical; that is, whether it has the decision-making capacity to use a weapon against a target without intervention from its human operator. It is possible, for example, for a drone to strike a static military target on its own (such as an enemy military base) but not a human target, because of the fear that innocent civilians could be injured or killed as collateral damage. Many countries have already developed drones with real-time imagery capable of acting autonomously in the former instance, but not when it comes to human targets.

Drones aren't the only weapons that can act autonomously. Military systems are being developed by the US, China, and several countries in Europe that can act autonomously in the air, on the ground, in water, and underwater, with varying degrees of success. Several types of autonomous helicopters, designed so that a soldier can direct them in the field with a smartphone, are in development in the US, Europe, and China. Autonomous ground vehicles, such as tanks and transport vehicles, and autonomous underwater vehicles are also in development. In almost all cases, however, the agencies developing these technologies are struggling to make the leap from development to operational implementation. There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but equally problematic are organizational and cultural barriers. The U.S. has, for instance, struggled to bring autonomous UAVs to operational status, primarily due to organizational infighting and prioritization in favor of manned aircraft.

In the battleground of the future, elite soldiers may rely on a head-up display that feeds them a wealth of information collected and routed through supercomputers carried in their backpacks and processed by an AI engine. With AI, the data is instantly analyzed, streamlined, and fed back into the head-up display. This is one of many potential scenarios presented by U.S. Defense Department officials. The Pentagon has embraced a relatively simple concept: the 'hyper-enabled operator.' The objective of this concept is to give Special Forces 'cognitive overmatch' on the battlefield, or 'the ability to dominate the situation by making informed decisions faster than the opponent.'
In other words, they will be able to make decisions based on the information they are receiving more rapidly than their enemy. The decision-making model for the military is called the 'OODA loop,' for 'observe, orient, decide, act.' That will come about using computers that register all relevant data and distill it into actionable information through a simple interface like a head-up display. This display will also offer a 'visual environment translation' system designed to convert foreign language inputs into clear English in real time. Known as VITA, the system encompasses both a visual environment translation effort and voice-to-voice translation capabilities. The translation engine will allow the operator to 'engage in effective conversations where it was previously impossible.' VITA, which stands for Versatile Intelligent Translation Assistant, offers users language capabilities in Russian, Ukrainian, and Chinese, including Mandarin. Operators could use their smartphones to scan a street in a foreign country, for example, and immediately obtain a translation of street signs in real time.

Military experts divide adversarial attacks into four categories: evasion, inference, poisoning, and extraction. These types of attacks are easily accomplished and often don't require computing skills. An enemy engaged in evasion attacks could attempt to deceive an AI weapon to avoid detection, hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus. This may require the development of a new type of AI camouflage, such as strategic tape placement, that can fool AI. Inference attacks occur when an adversary acquires information about an AI system that allows evasive techniques. Poisoning attacks target AI systems during training, tampering with the datasets used to train military tools: mislabeling images of vehicles to dupe targeting systems, for instance, or manipulating maintenance data so that an imminent system failure is classified as regular operation. Extraction attacks exploit access to the AI's interface to learn enough about the AI's operation to create a parallel model of the system. If AI systems are not secure from unauthorized users, then an adversary's users could predict decisions made by those systems and use those predictions to their advantage. For instance, they could predict how an AI-controlled unmanned system will respond to specific visual and electromagnetic stimuli and then alter its route and behavior. Deceptive attacks have become increasingly common, as illustrated by cases in which image classification algorithms were deceived into perceiving images that weren't there, confusing the meaning of images, or mistaking a turtle for a rifle. Similarly, autonomous vehicles could be forced to swerve into the wrong lane or speed through a stop sign.

In 2019, China announced a new military strategy, Intelligentized Warfare, which utilizes AI. Officials of the Chinese People's Liberation Army have stated that their forces can overtake the U.S. military by using AI. One of its intentions is to use this high-tech type of warfare to bring Taiwan under its control without waging conventional warfare. However, only a few of the many Chinese studies on intelligentized warfare have focused on replacing guns with AI. On the other hand, Chinese strategists have made no secret of their intention to control the enemy's will directly.
That would include the US president, members of Congress, combatant commanders, and citizens. 'Intelligence dominance', also known as cognitive warfare or 'control of the brain', is seen as the new battleground in intelligentized warfare, putting AI to a very different use than most American and allied discussions have envisioned. According to the Pentagon's 2022 report on Chinese military developments, the People's Liberation Army is being trained and equipped to use AI-enabled sensors and computer networks to 'rapidly identify key vulnerabilities in the US operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.' Controlling an adversary's mind can affect not just someone's perceptions of their surroundings but, ultimately, their decisions. For the People's Liberation Army, cognitive warfare is equal to the other domains of conflict, which are air, land, and sea. In that respect, social media is considered a key battlefield.

Russia has also been developing its own AI capacity. As early as 2014, the Russians inaugurated a National Defense Control Center in Moscow, a centralized command post for assessing and responding to global threats. The center was designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses. Russia has declared that it will eventually develop an AI system capable of running the world. Russians are already using AI in Ukraine to jam wireless signals connecting Ukrainian drones to the satellites they rely on for navigation, causing the machines to lose their way and plummet to Earth. The Russian Ministry of Defense (MOD) has explored ways in which AI systems can be developed for uncrewed systems for the air, maritime, and ground domains. At the same time, at least in the short term, official policy is predicated on the belief that humans must remain firmly in the loop. Meanwhile, the Russians are trying to improve UAV capabilities with AI as a mechanism for command, control, and communications. MOD also emphasizes the use of AI for data collection and analysis as a natural evolution from the current 'digital' combat technology and systems development.

The use of AI on the battlefield by US intelligence, while brief, showed promising results. 'Raven Sentry,' an AI tool launched in 2019 by a team of American intelligence officers (known as the 'nerd locker'), with help from Silicon Valley expertise, was intended to forecast insurgent attacks. The initial use of AI came at a time when US bases were closing, troop numbers were falling, and intelligence resources were being diverted. Raven Sentry relied on open-source data. 'We noticed an opportunity presented by the increased number of commercial satellites and the availability of news reports on the Internet, the proliferation of social media postings, and messaging apps with massive membership,' says Colonel Thomas Spahr, chief of staff of the Resolute Support J2 intelligence mission in Kabul, Afghanistan, from July 2019 to July 2020. The AI tool also drew on historical patterns based on insurgent activities in Afghanistan going back 40 years, which encompassed the Soviet occupation of the country in the 1980s. Environmental factors were also considered. 'Historically, insurgents attack on certain days of the year or holidays, for example, or during certain weather and illumination conditions,' Spahr notes. He adds, 'The beauty of the AI is that it continues to update that template.
The machine would learn as it absorbed more data.' Before its demise in 2021 (with the US withdrawal from Afghanistan), Raven Sentry had demonstrated its feasibility, predicting insurgent attacks with 70% accuracy. The AI tool predicted that attacks were more likely to occur when the temperature was above 4 degrees Celsius (39.2 degrees Fahrenheit), when lunar illumination was below 30%, and when there was no rain. Spahr was satisfied with the results: 'We validated that commercially produced, unclassified information can yield predictive intelligence.'

Ever since the Russian invasion, launched in 2022, Ukraine has become a testing ground for AI in warfare. Outgunned and outmanned, Ukrainian forces have resorted to improvisation, jerry-rigging off-the-shelf devices into lethal autonomous weapons. The Russian invaders, too, have employed AI, conducting cyberattacks and deploying GPS-jamming systems. Ukraine's Saker Scout quadcopters 'can find, identify, and attack 64 types of Russian 'military objects' on their own.' These drones are designed to operate autonomously, and unlike other drones that Ukrainian forces have deployed, Russia cannot jam them. By using code found online and hobbyist computers like the Raspberry Pi, easily obtained from hardware stores, Ukrainians are able to construct innovative killer robots.

Apart from drones, which can be operated with a smartphone, Ukrainians have built a gun turret with autonomous targeting operated with the same controller used by a PlayStation or a tablet. The gun, called Wolly because it bears a resemblance to the Pixar robot WALL-E, can auto-lock on a target up to 1,000 meters (3,280 feet) away and shift between preprogrammed positions to quickly cover a broad area. The manufacturer is also developing a gun capable of hitting moving targets, which can automatically identify targets as they come over the horizon. The gun targets and aims automatically; all that's left for the operator to do is press the button and shoot.

Many Ukrainian drones, which look like those you can find at Walmart, are called First Person View (FPV) drones. Capable of flying 100 miles per hour, FPV drones have four propellers and a mounted camera that sends footage of their flights back to operators wirelessly. With a bomb on board, an FPV can be converted into a weapon that can take out a tank. They're cheap, too; one manufacturer, Vyriy (named after a mythical land in Slavic folktales), charges $400 each, a small price to pay to disable a tank worth millions of dollars.

If one kamikaze drone is good, dozens of them are better, insofar as the greater their number, the greater the chance of several reaching their targets. In nature, a swarm of ants behaves as a single living organism, whether the task is collecting food or building a nest. Analogously, a swarm of autonomous drones could act as a single organism, no humans necessary, carrying out a mission regardless of how many are disabled or crash to the ground, or whether communication from the ground is disrupted or terminated. Although humans are still in the 'loop,' these weapons could equally be made entirely autonomous; in other words, they could decide which targets to strike without human intervention.

It isn't as if Ukraine has adopted AI weaponry without any tech experience. In the words of New York Times reporter Paul Mozur, 'Ukraine has been a bit of a back office for the global technology industry for a long time.'
The country already had a substantial pool of coders and skilled experts who, under emergency conditions, were able to make the transition from civilian uses (such as a dating app) to military purposes. As Mozur reported: 'What they're doing is they're taking basic code that is around, combining it with some new data from the war, and making it into something entirely different, which is a weapon.'

The reality is, 'there's a lot of cool, exciting stuff happening in the big defense primes,' says P.W. Singer, an author who writes about war and tech. 'There's a lot of cool, exciting stuff happening in the big-tech Silicon Valley companies. There's a lot of cool, exciting stuff happening in small startups.'

One of those smaller startups is Anduril. After selling the popular virtual reality headset Oculus to Facebook (now Meta), Palmer Luckey, an entrepreneur in his early thirties, went on to found an AI weapons company that is supplying drones to Ukraine. 'Ukraine is a very challenging environment to learn in,' he says. 'I've heard various estimates from the Ukrainians themselves that any given drone typically has a lifespan of about four weeks. The question is: can you respond and adapt?' Anduril, named after a sword in 'The Lord of the Rings', has sold its devices to ten countries, including the US. 'I had this belief that the major defense companies didn't have the right talent or the right incentive structure to invest in things like artificial intelligence, autonomy, robotics,' says Luckey.

His company's drone, called ALTIUS, is intended to be fired out of a tube and unfold itself, extending its wings and tail; then, steering with a propeller, it acts like a plane capable of carrying a 30-pound warhead. Luckey believes that his approach will result in more AI weapons being built in less time and at a lower cost than could be achieved by traditional defense contractors like McDonnell Douglas. Anduril, founded in 2017, is also developing the Dive-LD, a drone that will be used for surveys in littoral and deep water. 'It's an autonomous underwater vehicle that is able to go very, very long distances, dive to a depth of about 6,000 meters (almost 20,000 feet), which is deep enough to go to the bottom of almost any ocean,' says Luckey. Ukraine is already making its own sea drones, essentially jet skis packed with explosives, which have inflicted severe damage on the Russian navy in the Black Sea.

As Anduril's CEO Brian Schimpf admits, the introduction of Anduril's drones to Ukraine has yet to produce any significant results, although he believes that will change. Once they're launched, these drones will not require guidance from an operator on the ground, making it difficult for the Russians to destroy or disable them by jamming their signals. 'The autonomy onboard is really what sets it apart,' Luckey says. 'It's not a remote-controlled plane. There's a brain on it that is able to look for targets, identify targets, and fly into those targets.' However, for every innovative weapon system the Ukrainians develop, the Russians counter it with a system that renders it useless. 'Technologies that worked really well even a few months ago are now constantly having to change,' says Jacquelyn Schneider, who studies military technology as a fellow at the Hoover Institution. 'And the big difference I do see is that software changes the rate of change.'
In their invasion of Gaza, the Israel Defense Forces (IDF) have increasingly relied on a program supported by artificial intelligence to target Hamas operatives, with problematic consequences. According to an April 2024 report by +972 Magazine (an Israeli-Palestinian publication) and Local Call, a Hebrew-language news site, the IDF has been implementing a program known as 'Lavender,' whose influence on the military's operations is so profound that intelligence officials have essentially treated the outputs of the AI machine 'as if it were a human decision.' Lavender was developed by the elite Unit 8200, which is comparable to the National Security Agency in the US or the Government Communications Headquarters in the UK.

The Israeli government has defended Lavender for its practicality and efficiency. 'The Israeli military uses AI to augment the decision-making processes of human operators. This use is in accordance with international humanitarian law, as applied by the modern Armed Forces in many asymmetric wars since September 11, 2001,' says Magda Pacholska, a researcher at the TMC Asser Institute and a specialist in the intersection between disruptive technologies and military law.

The data used to develop Lavender and identify militants comes from the more than 2.3 million residents of the Gaza Strip, which was under intense surveillance prior to the Gaza invasion in 2023. The report states that as many as 37,000 Palestinians were designated as suspected militants and selected as potential targets. Lavender's kill lists were prepared in advance of the invasion, launched in response to the Hamas attack of October 7, 2023, which left about 1,200 people dead and saw about 250 hostages taken from Israel. A related AI program, which tracked the movements of individuals on the Lavender list, was called 'Where's Daddy?'

Sources for the +972 Magazine report said that initially there was 'no requirement to thoroughly check why the machine made those choices (of targets) or to examine the raw intelligence data on which they were based.' The officials in charge, these sources said, acted as a 'rubber stamp' for the machine's decisions before authorizing a bombing. One intelligence officer who spoke to +972 admitted as much: 'I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.'

It was already known that the Lavender program made errors in 10 percent of cases, meaning that a fraction of the individuals selected as targets might have had no connection with Hamas or any other militant group. The strikes generally occurred at night, when the targeted individuals were more likely to be at home, which posed a risk of killing or wounding their families as well. A score was created for each individual, ranging from 1 to 100, based on how closely he was linked to the armed wing of Hamas or Islamic Jihad. Those with a high score were killed along with their families and neighbors, despite the fact that officers reportedly did little to verify the potential targets identified by Lavender, citing 'efficiency' reasons. 'This is unparalleled, in my memory,' said one intelligence officer who used Lavender, adding that his colleagues had more faith in a 'statistical mechanism' than in a grieving soldier. 'Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.'
The IDF had previously used another AI system called 'The Gospel,' which was described in a previous investigation by the magazine, as well as in the Israeli military's own publications, to target buildings and structures suspected of harboring militants. 'The Gospel' draws on millions of items of data, producing target lists more than 50 times faster than a team of human intelligence officers ever could. It was used to strike 100 targets a day in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago. Structures of political or military significance for Hamas are known as 'power targets.'

If an AI weapon is autonomous, it needs to have the capacity for accurate perception. That is to say, if it mistakes a civilian car for a military target, its response rate isn't relevant; the civilians in the car die regardless. In many cases, of course, AI systems have excelled at perception as AI-powered machines and algorithms have become more refined. When, for instance, the Russian military conducted a test of 80 UAVs simultaneously flying over Syrian battlefields with unified visualization, then Russian Defense Minister Sergei Shoigu compared it to a 'semi-fantastic film' that revealed all potential targets.

But problems can creep in. In designing an AI weapon, developers first need access to data. Many AI systems are trained using data that has been labeled by an expert (e.g., labeling scenes that include an air defense battery), usually a human. An AI's image-processing capability won't function well when given images that differ from its training set: pictures taken in poor lighting, at an obtuse angle, or partially obscured, for example. AI recognition systems don't understand what the image is; rather, they learn the textures and gradients of the image's pixels. That means an AI system may correctly recognize a part of an image but not its entirety, which can result in misclassification.

To better defend AI against deceptive images, engineers subject systems to 'adversarial training.' This involves feeding a classifier adversarial images during training so that it learns to recognize and ignore attempts to fool it (a toy illustration of the idea appears at the end of this passage). Research by Nicolas Papernot, a graduate student at Pennsylvania State University, shows that a system, even bolstered by adversarial training, may be ineffective if overwhelmed by the sheer number of adversarial images. Adversarial images take advantage of a feature found in many AI systems known as 'decision boundaries.' These boundaries are the invisible rules that tell a system whether it is perceiving a lion or a leopard; in effect, the system builds a mental map with lions in one sector and leopards in another. The line dividing these two sectors, the border at which a lion becomes a leopard or a leopard a lion, is known as the decision boundary. Jeff Clune, who has also studied adversarial training, remains dubious about such classification systems because they're too arbitrary: 'All you're doing with these networks is training them to draw lines between clusters of data rather than deeply modeling what it is to be [a] leopard or a lion.'

Large datasets are often labeled by companies that employ manual methods. Obtaining and sharing datasets is a challenge, especially for an organization that prefers to classify data and restrict access to it.
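To make the ideas of adversarial examples, decision boundaries, and adversarial training more concrete, here is a minimal, illustrative sketch in Python with PyTorch. It is not any of the military systems described above: the tiny classifier, the FGSM-style perturbation, and every hyperparameter are stand-ins, chosen only to show the mechanics of nudging an input along the loss gradient until it crosses a learned decision boundary, and of training on a mix of clean and perturbed images.

```python
# Illustrative sketch only: FGSM-style adversarial examples and adversarial training
# for a toy image classifier. Model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Toy CNN standing in for an image-recognition model."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.head = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.head(x.flatten(1))

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel in the direction that
    increases the loss, producing an 'adversarial' version of the image."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a 50/50 mix of clean and adversarial images,
    the basic idea behind 'adversarial training'."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels) +
                  F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random stand-in batch: 8 RGB images of size 32x32 with binary labels.
    images = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 2, (8,))
    print("loss:", adversarial_training_step(model, optimizer, images, labels))
```

The epsilon-sized nudge is typically invisible to a human, yet it can be enough to push an input across the classifier's learned boundary; that is precisely the weakness adversarial training tries to blunt, and, as the research cited above suggests, only partially succeeds in blunting.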
A military dataset may contain images produced by thermal-imaging systems, for instance, but unless that dataset is shared with developers, an AI weapon won't be as effective. Similarly, AI devices that rely on chatbots limited to a few hundred words might not be able to completely replace a human with a much larger vocabulary. AI systems are also hampered by their inability to multitask. A human can identify an enemy vehicle, decide on a weapon system to employ against it, predict its path, and then engage the target; an AI system can't duplicate these steps. At this point, a system trained to identify a T-90 tank would most likely be unable to identify a Chinese Type 99 tank, despite the fact that both are tanks and both tasks require image recognition. Many researchers are trying to solve this problem by working to enable systems to transfer their learning, but such systems are years away from production (a brief sketch of the underlying idea appears at the end of this passage). Predictably, adversaries will try to take advantage of these weaknesses by fooling image recognition engines and sensors. They may also mount cyberattacks to evade intrusion detection systems or feed altered data to AI systems that will supply them with false requirements.

The US Department of Defense has been more partial to contracting for and building hardware than to implementing new technologies. All the same, the Air Force, in cooperation with Boeing, General Atomics, and a company called Kratos, is developing AI-powered drones. The Air Force is also testing the pilotless XQ-58A Valkyrie, an experimental aircraft run by artificial intelligence. This next-generation drone is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets: the objective is to give human pilots a swarm of highly capable robot wingmen to deploy in battle. The Valkyrie is not autonomous, however. Although it will use AI and sensors to identify and evaluate enemy threats, it will still be up to pilots to decide whether or not to strike a target.

Pentagon officials may not be deploying autonomous weapons in battle yet, but they are testing and perfecting weapons that will not rely on human intervention. One example is the Army's Project Convergence. In a test conducted as part of the project in August 2020 at the Yuma Proving Ground in Arizona, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then processed that data using AI-enabled computers at a base in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. 'This entire sequence was supposedly accomplished within 20 seconds,' the Congressional Research Service later reported. Under a US program known as the Replicator initiative, the Pentagon has said it plans to mass-produce thousands of autonomous drones. However, no official policy has condoned the use of autonomous weapons, which would allow devices to decide whether to strike a target without a human's approval. The Navy has an AI equivalent of Project Convergence called 'Project Overmatch.' In the words of Adm. Michael Gilday, chief of naval operations, it is intended 'to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.' Very little has been revealed about the project.
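The transfer-learning idea mentioned above (reusing what a network learned on one recognition task as the starting point for a related one) can be sketched in a few lines of Python with PyTorch. The 'backbone' here is a random stand-in for a genuinely pretrained feature extractor, and the image sizes and class counts are arbitrary; the point is only the pattern of freezing learned features and training a small new head.

```python
# Minimal transfer-learning sketch: freeze a (nominally pretrained) feature
# extractor and fine-tune only a new classification head for a related task.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in "pretrained" backbone; in practice this would be a network trained
# on a large image dataset whose learned features we want to reuse.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 128), nn.ReLU(),
)

# Freeze the backbone so its features are reused rather than overwritten.
for param in backbone.parameters():
    param.requires_grad = False

# New, trainable head for the new recognition task (3 hypothetical classes).
head = nn.Linear(128, 3)
model = nn.Sequential(backbone, head)

# Only the head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# One fine-tuning step on a random stand-in batch of 4 RGB images (64x64).
images = torch.rand(4, 3, 64, 64)
labels = torch.randint(0, 3, (4,))
loss = F.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print("fine-tuning step loss:", loss.item())
```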
About 7,000 analysts employed by the National Security Agency (NSA) are trying to integrate AI into its operations, according to General Timothy Haugh, who serves as the NSA Director, US Cyber Command Commander and Chief of the Central Security Service. General Haugh has disclosed that as of 2024, the NSA is engaged in 170 AI projects, of which 10 are considered critical to national security. 'Those other 160, we want to create opportunities for people to experiment, leverage, and compliantly use,' he says. At present, though, AI is still regarded as a supplement to conventional platforms.

AI is also envisioned as playing four additional roles: automating planning and strategy; fusing and interpreting signals more efficiently than humans or conventional systems can do; aiding space-based systems, mainly by collecting and synthesizing information to counter hypersonics; and enabling next-generation cyber and information warfare capabilities.

Although the use of autonomous weapons has been a subject of debate for decades, few observers expect any international deal to establish new regulations, especially as the US, China, Israel, Russia, and others race to develop even more advanced weapons. 'The geopolitics makes it impossible,' says Alexander Kmentt, Austria's top negotiator on autonomous weapons at the UN. 'These weapons will be used, and they'll be used in the military arsenal of pretty much everybody.' Despite such challenges, Human Rights Watch has called for 'the urgent negotiation and adoption of a legally binding instrument to prohibit and regulate autonomous weapons systems.' It has launched the Campaign to Stop Killer Robots, which the human rights organization says has been joined by more than 270 groups and 70 countries. Even though the controversy has centered around autonomous weapons, Brian Schimpf, CEO of AI drone manufacturer Anduril, has another perspective. He says AI weapons are 'not about taking humans out of the loop. I don't think that's the right ethical framework. This is really about how we make human decision-makers more effective and more accountable [for] their decisions.' All the same, autonomous AI weapons are already under development.

Aside from the ethics of relying on a weapon to make life-and-death decisions, there is a problem with AI itself. Errors and miscalculations are relatively common. Algorithms underlying the operations of AI systems are capable of making mistakes ('hallucinations') in which seemingly reasonable results turn out to be entirely illusory. That could have profound implications for deploying AI weapons that operate with deeply flawed instructions undetectable by human operators. In a particularly dystopian scenario, an adversary might substitute robot generals for human ones, forcing the US to do the same, with the result that AI systems may be pitted against one another on the battlefield with unpredictable and possibly catastrophic consequences.

Dr Elke Schwarz of Queen Mary University of London views the AI weapon dilemma through a theoretical framework that relies on political science and empirical investigations in her consideration of the ethical dimensions of AI in warfare. She believes that the integration of AI-enabled weapon systems facilitates the objectification of human targets, leading to heightened tolerance for collateral damage. In her view, automation can 'weaken moral agency among operators of AI-enabled targeting systems, diminishing their capacity for ethical decision-making.'
The bias towards autonomous systems may also encourage the defense industry to rush headlong into funding military AI systems, 'influencing perceptions of responsible AI use in warfare.' She urges policymakers to take the risks into account before it's too late.

'(T)he effect of AI is much, much more than the machine gun or plane. It is more like the shift from muscle power to machine power in the last Industrial Revolution,' says Peter Singer, a professor at Arizona State University and a strategist and senior fellow at the US think tank New America, who has written extensively about AI and warfare. 'I believe that the advent of AI on the software side and its application into robotics on the hardware side is the equivalent of the industrial revolution when we saw mechanization.' This transformation raises new questions 'of right and wrong that we weren't wrestling with before.' He advocates setting 'frameworks to govern the use of AI in warfare' that should apply to the people working on its design and use. One issue, which Singer calls 'machine permissibility,' is what the machine should be allowed to do apart from human control. He calls attention to a second issue 'that we've never dealt with before': 'machine accountability.' 'If something happens, who do we hold responsible if it is the machine that takes the action? It's very easy to understand that with a regular car; it's harder to understand that with a so-called driverless car.' On the battlefield, would the machine be held responsible if the target was mistaken or if civilians were killed as a result?

Leslie Alan Horvitz is an author and journalist specializing in science and a contributor to the Observatory. His nonfiction books include 'Eureka: Scientific Breakthroughs That Changed the World,' 'Understanding Depression' with Dr Raymond DePaulo of Johns Hopkins University, and 'The Essential Book of Weather Lore.' His articles have been published by Travel and Leisure, Scholastic, Washington Times and Insight on the News, among others. He has served on the board of Art Omi and is a member of PEN America. Horvitz is based in New York City. This article was produced for the Observatory by the Independent Media Institute and is republished here with permission.

The Rise Of AI Warfare: How Autonomous Weapons And Cognitive Warfare Are Reshaping Global Military Strategy

Scoop

15-05-2025


In the 1983 film War Games, a supercomputer known as WOPR (for War Operation Plan Response) is about to provoke a nuclear war between the United States and the Soviet Union, but because of the ingenuity of a teenager (played by Matthew Broderick), catastrophe is averted. In the first Terminator film, which was released a year later, a supercomputer called 'Skynet' decides to exterminate humanity because it's perceived as a threat to its existence rather than to protect American nuclear weapons. Although these films offered audiences grim scenarios of intelligent machines running amok, they were also prophetic. Artificial intelligence (AI) is so commonplace that it's routinely applied during a simple Google search. That it is also being integrated into military strategies is hardly any surprise. It's just that we have little understanding of the capacity of these high-tech weapons (those that are now ready for use and those in development). Nor are we prepared for systems that have the capacity to transform warfare forever. Throughout history, it is human intelligence that uses the technology, not the technology itself, which has won or lost wars. That may change in the future when human intelligence is focused instead on creating systems that are more capable on the battlefield than those of the adversary. An 'Exponential, Insurmountable Surprise' Artificial intelligence isn't a technology that can be easily detected, monitored, or banned, as Amir Husain, the founder and CEO of an AI company, SparkCognition, pointed out in an essay for Media News. Integrating AI elements—visual recognition, language analysis, simulation-based prediction, and advanced forms of search—with existing technologies and platforms 'can rapidly yield entirely new and unforeseen capabilities.' The result 'can create exponential, insurmountable surprise,' Hussain writes. Advanced technology in warfare is already widespread. The use of uncrewed aerial vehicles (UAVs)—commonly known as drones—in military settings has set off warnings about 'killer robots.' What happens when drones are no longer controlled by humans and can execute military missions on their own? These drones aren't limited to the air; they can operate on the ground or underwater as well. The introduction of AI, effectively giving these weapons the capacity for autonomy, isn't far off. Moreover, they're cheap to produce and cheap to purchase. The Russians are buying drones from Iran for use in their war in Ukraine, and the Ukrainians have been putting together a cottage industry constructing drones of their own against the Russians. The relative ease with which a commercial drone can be converted into one with a military application also blurs the line between commercial and military enterprises. At this point, though, humans are still in charge. A similar problem can be seen in information-gathering systems that have dual uses, including satellites, manned and unmanned aircraft, ground and undersea radars, and sensors, all of which have both commercial and military applications. AI can process vast amounts of data from all these systems and then discern meaningful patterns, identifying changes that humans might never notice. American forces were stymied to some degree in wars in Iraq and Afghanistan because they could not process large amounts of data. Even now, remotely piloted UAVs are using AI for autonomous takeoff, landing, and routine flight. 
All that's left for human operators to do is concentrate on tactical decisions, such as selecting attack targets and executing attacks. AI also allows these systems to operate rapidly, determining actions at speeds that are seldom possible if humans are part of the decision-making process. Until now, decision-making speed has been the most important aspect of warfare. If, however, AI systems go head-to-head against humans, AI will invariably come out ahead. However, the possibility that AI systems eliminate the human factor terrifies people who don't want to see an apocalyptic scenario on celluloid come to pass in reality. Automated Versus Autonomous A distinction needs to be made between the term 'autonomous' and the term 'automated.' If we are controlling the drone, then the drone is automated. But if the drone is programmed to act on its own initiative, we would say it is autonomous. But does the autonomous weapon describe the actual weapon—i.e., a missile on a drone—or the drone itself? Take, for example, the Global Hawk military UAV (drone). It is automated insofar as it is controlled by an operator on the ground, and yet if it loses communication with the ground, the Golden Hawk can land on its own. Does that make it automated or autonomous? Or is it both? The most important question is whether the system is safety-critical. Translated, that means whether it has the decision-making capacity to use a weapon against a target without intervention from its human operator. It is possible, for example, for a drone to strike a static military target on its own (such as an enemy military base) but not a human target because of the fear that innocent civilians could be injured or killed as collateral damage. Many countries have already developed drones with real-time imagery capable of acting autonomously in the former instance, but not when it comes to human targets. Drones aren't the only weapons that can act autonomously. Military systems are being developed by the U.S., China, and several countries in Europe that can act autonomously in the air, on the ground, in water, and underwater with varying degrees of success. Several types of autonomous helicopters designed so that a soldier can direct them in the field with a smartphone are in development in the U.S., Europe, and China. Autonomous ground vehicles, such as tanks and transport vehicles, and autonomous underwater vehicles are also in development. In almost all cases, however, the agencies developing these technologies are struggling to make the leap from development to operational implementation. There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but equally problematic are organizational and cultural barriers. The U.S. has, for instance, struggled to bring autonomous UAVs to operational status, primarily due to organizational infighting and prioritization in favor of manned aircraft. The Future Warrior In the battleground of the future, elite soldiers may rely on a head-up display that feeds them a wealth of information that is collected and routed through supercomputers carried in their backpacks using an AI engine. With AI, the data is instantly analyzed, streamlined, and fed back into the head-up display. This is one of many potential scenarios presented by U.S. Defense Department officials. The Pentagon has embraced a relatively simple concept: the 'hyper-enabled operator.' 
The objective of this concept is to give Special Forces 'cognitive overmatch' on the battlefield, or 'the ability to dominate the situation by making informed decisions faster than the opponent.' In other words, they will be able to make decisions based on the information they are receiving more rapidly than their enemy. The decision-making model for the military is called the 'OODA loop' for 'observe, orient, decide, act.' That will come about using computers that register all relevant data and distill them into actionable information through a simple interface like a head-up display. This display will also offer a 'visual environment translation' system designed to convert foreign language inputs into clear English in real time. Known as VITA, the system encompasses both a visual environment translation effort and voice-to-voice translation capabilities. The translation engine will allow the operator to 'engage in effective conversations where it was previously impossible.' VITA, which stands for Versatile Intelligent Translation Assistant, offers users language capabilities in Russian, Ukrainian, and Chinese, including Mandarin, a Chinese dialect. Operators could use their smartphones to scan a street in a foreign country, for example, and immediately obtain a translation of street signs in real-time. Adversary AI Systems Military experts divide adversarial attacks into four categories: evasion, inference, poisoning, and extraction. These types of attacks are easily accomplished and often don't require computing skills. An enemy engaged in evasive attacks could attempt to deceive an AI weapon to avoid detection—hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus. This may require the development of a new type of AI camouflage, such as strategic tape placement, that can fool AI. Inference attacks occur when an adversary acquires information about an AI system that allows evasive techniques. Poisoning attacks target AI systems during training, interfering with access to the datasets used to train military tools—mislabeling images of vehicles to dupe targeting systems, for instance, or manipulating maintenance data designed to classify imminent system failure as a regular operation. Extraction attacks exploit access to the AI's interface to learn enough about the AI's operation to create a parallel model of the system. If AI systems are not secure from unauthorized users, then an adversary's users could predict decisions made by those systems and use those predictions to their advantage. For instance, they could predict how an AI-controlled unmanned system will respond to specific visual and electromagnetic stimuli and then proceed to alter its route and behavior. Deceptive attacks have become increasingly common, as illustrated by cases involving image classification algorithms that are deceived into perceiving images that aren't there, confusing the meaning of images, and mistaking a turtle for a rifle, for instance. Similarly, autonomous vehicles could be forced to swerve into the wrong lane or speed through a stop sign. In 2019, China announced a new military strategy, Intelligentized Warfare, which utilizes AI. Officials of the Chinese People's Liberation Army have stated that their forces can overtake the U.S. military by using AI. One of its intentions is to use this high-tech type of warfare to bring Taiwan under its control without waging conventional warfare. 
However, only a few of the many Chinese studies on intelligentized warfare have focused on replacing guns with AI. On the other hand, Chinese strategists have made no secret of their intention to control the enemy's will directly. That would include the U.S. president, members of Congress, combatant commanders, and citizens. 'Intelligence dominance'—also known as cognitive warfare or 'control of the brain'—is seen as the new battleground in intelligentized warfare, putting AI to a very different use than most American and allied discussions have envisioned. According to the Pentagon's 2022 report on Chinese military developments, the People's Liberation Army is being trained and equipped to use AI-enabled sensors and computer networks to 'rapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.' Controlling an adversary's mind can affect not just someone's perception of their surroundings but, ultimately, their decisions. For the People's Liberation Army, cognitive warfare ranks alongside the other domains of conflict: air, land, and sea. In that respect, social media is considered a key battlefield.

Russia has also been developing its own AI capacity. As early as 2014, the Russians inaugurated a National Defense Control Center in Moscow, a centralized command post for assessing and responding to global threats. The center was designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses. Russia has declared that it will eventually develop an AI system capable of running the world. Russians are already using AI in Ukraine to jam the wireless signals connecting Ukrainian drones to the satellites they rely on for navigation, causing the machines to lose their way and plummet to Earth. The Russian Ministry of Defense (MOD) has explored ways to develop AI for uncrewed systems in the air, maritime, and ground domains. At the same time, at least in the short term, official policy is predicated on the belief that humans must remain firmly in the loop. Meanwhile, the Russians are trying to improve UAV capabilities with AI as a mechanism for command, control, and communications. The MOD also emphasizes the use of AI for data collection and analysis as a natural evolution from current 'digital' combat technology and systems development.

'Raven Sentry': AI in the U.S. War in Afghanistan

The use of AI on the battlefield by U.S. intelligence, while brief, showed promising results. 'Raven Sentry,' an AI tool launched in 2019 by a team of American intelligence officers (known as the 'nerd locker') with help from Silicon Valley expertise, was intended to forecast insurgent attacks. The initial use of AI came at a time when U.S. bases were closing, troop numbers were falling, and intelligence resources were being diverted. Raven Sentry relied on open-source data. 'We noticed an opportunity presented by the increased number of commercial satellites and the availability of news reports on the Internet, the proliferation of social media postings, and messaging apps with massive membership,' says Col. Thomas Spahr, chief of staff of the Resolute Support J2 intelligence mission in Kabul, Afghanistan, from July 2019 to July 2020.
The AI tool also drew on historical patterns of insurgent activity in Afghanistan going back 40 years, a period encompassing the Soviet occupation of the country in the 1980s. Environmental factors were also considered. 'Historically, insurgents attack on certain days of the year or holidays, for example, or during certain weather and illumination conditions,' Spahr notes. He adds, 'The beauty of the AI is that it continues to update that template. The machine would learn as it absorbed more data.'

Before its demise in 2021 (with the U.S. withdrawal from Afghanistan), Raven Sentry had demonstrated its feasibility, predicting insurgent attacks with 70 percent accuracy. The AI tool predicted that attacks were more likely to occur when the temperature was above 4 degrees Celsius (39.2 degrees Fahrenheit), when lunar illumination was below 30 percent, and when there was no rain. Spahr was satisfied with the results: 'We validated that commercially produced, unclassified information can yield predictive intelligence.'

Ukraine as Testing Ground for AI

Ever since the Russian invasion, launched in 2022, Ukraine has become a testing ground for AI in warfare. Outgunned and outmanned, Ukrainian forces have resorted to improvisation, jerry-rigging off-the-shelf devices into lethal autonomous weapons. The Russian invaders, too, have employed AI, mounting cyberattacks and deploying GPS-jamming systems.

Ukraine's Saker Scout quadcopters can reportedly 'find, identify, and attack 64 types of Russian military objects' on their own. These drones are designed to operate autonomously, and unlike other drones that Ukrainian forces have deployed, Russia cannot jam them. Using code found online and hobbyist computers like the Raspberry Pi, easily obtained from hardware stores, Ukrainians are able to construct innovative killer robots. Apart from drones, which can be operated with a smartphone, Ukrainians have built a gun turret with autonomous targeting that can be operated with a PlayStation controller or a tablet. The gun, called Wolly because it bears a resemblance to the Pixar robot WALL-E, can auto-lock on a target up to 1,000 meters (3,280 feet) away and shift between preprogrammed positions to quickly cover a broad area. The manufacturer is also developing a version capable of hitting moving targets, one that automatically identifies and aims at targets as they come over the horizon; all that's left for the operator to do is press the button and shoot.

Many Ukrainian drones, which look like those you can find at Walmart, are First Person View (FPV) drones. Capable of flying 100 miles per hour, FPV drones have four propellers and a mounted camera that sends footage of their flights back to operators over a wireless link. With a bomb on board, an FPV drone becomes a weapon that can take out a tank. They're cheap, too; one manufacturer, Vyriy, charges $400 each, a small price to pay to disable a tank worth millions of dollars. Vyriy derives its name from a mythical land in Slavic folktales.

If one kamikaze drone is good, dozens are better: the greater their number, the greater the chance that several reach their targets.
In nature, a swarm of ants behaves as a single living organism, whether the task is collecting food or building a nest. Analogously, a swarm of autonomous drones could act as a single organism—no humans necessary—carrying out a mission regardless of how many are disabled or crash to the ground, or whether communication from the ground is disrupted or terminated. Although humans are still in the 'loop,' these weapons could equally be made entirely autonomous; in other words, they could decide which targets to strike without human intervention.

It isn't as if Ukraine has adopted AI weaponry without any tech experience. In the words of New York Times reporter Paul Mozur, 'Ukraine has been a bit of a back office for the global technology industry for a long time.' The country already had a substantial pool of coders and skilled experts who, under emergency conditions, were able to make the transition from civilian uses (such as a dating app) to military purposes. As Mozur reported: 'What they're doing is they're taking basic code that is around, combining it with some new data from the war, and making it into something entirely different, which is a weapon.'

The reality is, 'there's a lot of cool, exciting stuff happening in the big defense primes,' says P.W. Singer, an author who writes about war and tech. 'There's a lot of cool, exciting stuff happening in the big-tech Silicon Valley companies. There's a lot of cool, exciting stuff happening in small startups.'

One of those smaller startups is Anduril. After selling Oculus, the popular virtual reality headset maker, to Facebook (now Meta), Palmer Luckey, an entrepreneur in his early thirties, went on to found an AI weapons company that is supplying drones to Ukraine. 'Ukraine is a very challenging environment to learn in,' he says. 'I've heard various estimates from the Ukrainians themselves that any given drone typically has a lifespan of about four weeks. The question is: Can you respond and adapt?'

Anduril, named after a sword in The Lord of the Rings, has sold its devices to ten countries, including the U.S. 'I had this belief that the major defense companies didn't have the right talent or the right incentive structure to invest in things like artificial intelligence, autonomy, robotics,' says Luckey. His company's drone, called ALTIUS, is designed to be fired out of a tube and unfold itself, extending its wings and tail; then, steering with a propeller, it acts like a plane capable of carrying a 30-pound warhead. Luckey believes that his approach will result in more AI weapons being built in less time and at a lower cost than traditional defense contractors like McDonnell Douglas could achieve. Anduril, founded in 2017, is also developing the Dive-LD, a drone that will be used for surveys in littoral and deep water. 'It's an autonomous underwater vehicle that is able to go very, very long distances, dive to a depth of about 6,000 meters (almost 20,000 feet), which is deep enough to go to the bottom of almost any ocean,' says Luckey. Ukraine is already making its own sea drones—essentially jet skis packed with explosives—which have inflicted severe damage on the Russian navy in the Black Sea.

As Anduril's CEO Brian Schimpf admits, the introduction of Anduril's drones to Ukraine has yet to produce significant results, although he believes that will change. Once launched, these drones will not require guidance from an operator on the ground, making it difficult for the Russians to destroy or disable them by jamming their signals.
'The autonomy onboard is really what sets it apart,' Luckey says. 'It's not a remote-controlled plane. There's a brain on it that is able to look for targets, identify targets, and fly into those targets.'

However, for every innovative weapon system the Ukrainians develop, the Russians counter with a system that renders it useless. 'Technologies that worked really well even a few months ago are now constantly having to change,' says Jacquelyn Schneider, who studies military technology as a fellow at the Hoover Institution. 'And the big difference I do see is that software changes the rate of change.'

The War in Gaza: Lavender

In their invasion of Gaza, the Israel Defense Forces (IDF) have increasingly relied on a program supported by artificial intelligence to target Hamas operatives, with problematic consequences. According to an April 2024 report by +972 Magazine (an Israeli-Palestinian publication) and Local Call, a Hebrew-language news site, the IDF has been implementing a program known as 'Lavender,' whose influence on the military's operations is so profound that intelligence officials have essentially treated the outputs of the AI machine 'as if it were a human decision.' Lavender was developed by the elite Unit 8200, which is comparable to the National Security Agency in the U.S. or the Government Communications Headquarters in the UK.

The Israeli government has defended Lavender on the grounds of practicality and efficiency. 'The Israeli military uses AI to augment the decision-making processes of human operators. This use is in accordance with international humanitarian law, as applied by the modern Armed Forces in many asymmetric wars since September 11, 2001,' says Magda Pacholska, a researcher at the TMC Asser Institute and a specialist in the intersection between disruptive technologies and military law.

The data used to develop Lavender and identify militants comes from the more than 2.3 million residents of the Gaza Strip, which was under intense surveillance prior to the invasion of Gaza in 2023. The report states that as many as 37,000 Palestinians were designated as suspected militants and selected as potential targets. Lavender's kill lists were prepared in advance of the invasion, which was launched in response to the Hamas attack of October 7, 2023, in which about 1,200 people were killed and about 250 hostages were taken from Israel. A related AI program, which tracked the movements of individuals on the Lavender list, was called 'Where's Daddy?'

Sources for the +972 Magazine report said that initially there was 'no requirement to thoroughly check why the machine made those choices (of targets) or to examine the raw intelligence data on which they were based.' The officials in charge, these sources said, acted as a 'rubber stamp' for the machine's decisions before authorizing a bombing. One intelligence officer who spoke to +972 admitted as much: 'I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.' It was already known that the Lavender program made errors in 10 percent of cases, meaning that a fraction of the individuals selected as targets might have had no connection with Hamas or any other militant group. The strikes generally occurred at night, when the targeted individuals were more likely to be at home, which posed a risk of killing or wounding their families as well. A score from 1 to 100 was assigned to each individual based on how closely he was linked to the armed wing of Hamas or Islamic Jihad.
Those with a high score were killed along with their families and neighbors, even though officers reportedly did little to verify the potential targets identified by Lavender, citing 'efficiency' reasons. 'This is unparalleled, in my memory,' said one intelligence officer who used Lavender, adding that his colleagues had more faith in a 'statistical mechanism' than a grieving soldier. 'Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.'

The IDF had previously used another AI system, called 'The Gospel,' to target buildings and structures suspected of harboring militants; it was described in an earlier investigation by the magazine as well as in the Israeli military's own publications. 'The Gospel' draws on millions of items of data, producing target lists more than 50 times faster than a team of human intelligence officers ever could. It was used to strike 100 targets a day in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago. Structures of political or military significance for Hamas are known as 'power targets.'

Weaknesses of AI Weapons

If an AI weapon is autonomous, it needs to have the capacity for accurate perception. That is to say, if it mistakes a civilian car for a military target, its speed of response is irrelevant: the civilians in the car die regardless. In many cases, of course, AI systems have excelled at perception as AI-powered machines and algorithms have grown more refined. When the Russian military tested 80 UAVs flying simultaneously over Syrian battlefields with unified visualization, for instance, then-Russian Defense Minister Sergei Shoigu compared it to a 'semi-fantastic film' that revealed all potential targets.

But problems can creep in. In designing an AI weapon, developers first need access to data. Many AI systems are trained on data that has been labeled by an expert, usually a human (tagging scenes that include an air defense battery, for example). An AI's image-processing capability won't function well when given images that differ from its training set—for example, pictures shot in poor lighting, at an odd angle, or partially obscured. AI recognition systems don't understand what the image is; rather, they learn the textures and gradients of the image's pixels. That means an AI system may correctly recognize a part of an image but not its entirety, which can result in misclassification.

To better defend AI against deceptive images, engineers subject it to 'adversarial training.' This involves feeding a classifier adversarial images so it can learn to recognize and discount images designed to deceive it. Research by Nicolas Papernot, a graduate student at Pennsylvania State University, shows that a system, even bolstered by adversarial training, may be ineffective if overwhelmed by the sheer number of images. Adversarial images take advantage of a feature found in many AI systems known as 'decision boundaries.' These boundaries are the invisible rules that tell a system whether it is perceiving a lion or a leopard. The objective is to create a mental map with lions in one sector and leopards in another. The line dividing these two sectors—the border at which a lion becomes a leopard or a leopard a lion—is known as the decision boundary.
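To make the idea concrete, here is a minimal, purely illustrative sketch in Python (using the open-source NumPy and scikit-learn libraries, not any military system): a toy two-feature 'lion versus leopard' classifier is trained on synthetic data, and a small, deliberate nudge along the model's weight vector pushes a correctly classified sample across the decision boundary until its label flips. The feature values, class labels, and step size are all invented for illustration; gradient-based attacks on real image classifiers exploit the same geometry in far higher dimensions.

    # Illustrative sketch only: a toy 2-D "lion vs. leopard" classifier showing how
    # a small, targeted nudge pushes a sample across the decision boundary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Two synthetic clusters standing in for "lion" (label 0) and "leopard" (label 1).
    lions = rng.normal(loc=[1.0, 1.0], scale=0.4, size=(200, 2))
    leopards = rng.normal(loc=[3.0, 3.0], scale=0.4, size=(200, 2))
    X = np.vstack([lions, leopards])
    y = np.array([0] * 200 + [1] * 200)

    clf = LogisticRegression().fit(X, y)

    # Take a correctly classified "lion" and nudge it toward the boundary: for a
    # linear model, stepping along the weight vector is the most efficient way to
    # change the score w.x + b, which is exactly what gradient-based attacks exploit.
    x = lions[0]
    w = clf.coef_[0]
    step = 0.1 * w / np.linalg.norm(w)

    x_adv = x.copy()
    while clf.predict([x_adv])[0] == 0:   # keep nudging until the predicted label flips
        x_adv = x_adv + step

    print("original point:", x, "->", clf.predict([x])[0])
    print("perturbed point:", x_adv, "->", clf.predict([x_adv])[0])
    print("total perturbation:", np.linalg.norm(x_adv - x))

In two dimensions the required nudge is plainly visible, but in the million-dimensional space of an image the same trick can be spread across pixels in changes too small for a person to notice.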
Jeff Clune, who has also studied adversarial training, remains dubious about such classification systems because they're too arbitrary. 'All you're doing with these networks is training them to draw lines between clusters of data rather than deeply modeling what it is to be [a] leopard or a lion.'

Large datasets are often labeled by companies that employ manual methods. Obtaining and sharing datasets is a challenge, especially for an organization that prefers to classify data and restrict access to it. A military dataset may contain images produced by thermal-imaging systems, for instance, but unless this dataset is shared with developers, an AI weapon won't be as effective. Likewise, AI devices that rely on chatbots limited to hundreds of words might not be able to completely replace a human with a much larger vocabulary.

AI systems are also hampered by their inability to multitask. A human can identify an enemy vehicle, decide on a weapon system to employ against it, predict its path, and then engage the target. An AI system can't duplicate these steps. At this point, a system trained to identify a T-90 tank most likely would be unable to identify a Chinese Type 99 tank, even though both are tanks and both tasks require image recognition. Many researchers are trying to solve this problem by enabling systems to transfer their learning, but such systems are years away from production. Predictably, adversaries will try to take advantage of these weaknesses by fooling image recognition engines and sensors. They may also mount cyberattacks to evade intrusion detection systems or feed altered data to AI systems that will supply them with false requirements.

U.S. Preparedness

The U.S. Department of Defense has been more partial to contracting for and building hardware than to implementing new technologies. All the same, the Air Force, in cooperation with Boeing, General Atomics, and a company called Kratos, is developing AI-powered drones. The Air Force is also testing the pilotless XQ-58A Valkyrie, an experimental aircraft run by artificial intelligence. This next-generation drone is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets: the objective is to give human pilots a swarm of highly capable robot wingmen to deploy in battle. The Valkyrie is not autonomous, however. Although it will use AI and sensors to identify and evaluate enemy threats, it will still be up to pilots to decide whether or not to strike the target.

Pentagon officials may not be deploying autonomous weapons in battle yet, but they are testing and perfecting weapons that will not rely on human intervention. One example is the Army's Project Convergence. In a test conducted as part of the project in August 2020 at the Yuma Proving Ground in Arizona, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then processed that data using AI-enabled computers at a base in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. 'This entire sequence was supposedly accomplished within 20 seconds,' the Congressional Research Service later reported. Under a U.S. program known as the Replicator initiative, the Pentagon said it planned to mass-produce thousands of autonomous drones. However, no official policy has condoned the use of autonomous weapons, which would allow devices to decide whether to strike a target without a human's approval.
The Navy has an AI equivalent of Project Convergence called 'Project Overmatch.' In the words of Adm. Michael Gilday, chief of naval operations, it is intended 'to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near and far, every axis, and every domain.' Very little has been revealed about the project.

Some 7,000 analysts at the National Security Agency (NSA) are working to integrate AI into the agency's operations, according to General Timothy Haugh, who serves as NSA Director, Commander of U.S. Cyber Command, and Chief of the Central Security Service. General Haugh has disclosed that as of 2024, the NSA is engaged in 170 AI projects, of which 10 are considered critical to national security. 'Those other 160, we want to create opportunities for people to experiment, leverage, and compliantly use,' he says. At present, though, AI is still regarded as a supplement to conventional platforms. AI is also envisioned as playing four additional roles: automating planning and strategy; fusing and interpreting signals more efficiently than humans or conventional systems can; aiding space-based systems, mainly by collecting and synthesizing information to counter hypersonics; and enabling next-generation cyber and information warfare capabilities.

Ethics of AI Use

Although the use of autonomous weapons has been a subject of debate for decades, few observers expect any international deal to establish new regulations, especially as the U.S., China, Israel, Russia, and others race to develop even more advanced weapons. 'The geopolitics makes it impossible,' says Alexander Kmentt, Austria's top negotiator on autonomous weapons at the UN. 'These weapons will be used, and they'll be used in the military arsenal of pretty much everybody.' Despite such challenges, Human Rights Watch has called for 'the urgent negotiation and adoption of a legally binding instrument to prohibit and regulate autonomous weapons systems.' It has launched the Campaign to Stop Killer Robots, which the human rights organization says has been joined by more than 270 groups and 70 countries.

Even though the controversy has centered on autonomous weapons, Brian Schimpf, CEO of AI drone manufacturer Anduril, has another perspective. He says AI weapons are 'not about taking humans out of the loop. I don't think that's the right ethical framework. This is really about how we make human decision-makers more effective and more accountable [for] their decisions.'

All the same, autonomous AI weapons are already under development. Aside from the ethics of relying on a weapon to make life-and-death decisions, there is a problem with AI itself. Errors and miscalculations are relatively common. The algorithms underlying AI systems are capable of making mistakes—'hallucinations'—in which seemingly reasonable results turn out to be entirely illusory. That could have profound implications for deploying AI weapons that operate with deeply flawed instructions undetectable by human operators. In a particularly dystopian scenario, an adversary might substitute robot generals for human ones, forcing the U.S. to do the same, with the result that AI systems could be pitted against one another on the battlefield with unpredictable and possibly catastrophic consequences.
Dr. Elke Schwarz of Queen Mary University of London examines the ethical dimensions of AI in warfare through a theoretical framework grounded in political science and empirical investigation. She believes that the integration of AI-enabled weapon systems facilitates the objectification of human targets, leading to a heightened tolerance for collateral damage. In her view, automation can 'weaken moral agency among operators of AI-enabled targeting systems, diminishing their capacity for ethical decision-making.' The bias toward autonomous systems may also encourage the defense industry to rush headlong into funding military AI systems, 'influencing perceptions of responsible AI use in warfare.' She urges policymakers to take these risks into account before it's too late.

'(T)he effect of AI is much, much more than the machine gun or plane. It is more like the shift from muscle power to machine power in the last Industrial Revolution,' says Peter Singer, a professor at Arizona State University and a strategist and senior fellow at the U.S. think tank New America, who has written extensively about AI and warfare. 'I believe that the advent of AI on the software side and its application into robotics on the hardware side is the equivalent of the industrial revolution when we saw mechanization.' This transformation raises new questions 'of right and wrong that we weren't wrestling with before.' He advocates setting 'frameworks to govern the use of AI in warfare' that should apply to the people working on its design and use.

One issue, which Singer calls 'machine permissibility,' is what the machine should be allowed to do apart from human control. He calls attention to a second issue 'that we've never dealt with before,' which is 'machine accountability.' 'If something happens, who do we hold responsible if it is the machine that takes the action? It's very easy to understand that with a regular car; it's harder to understand that with a so-called driverless car.' On the battlefield, would the machine be held responsible if the target was mistaken or if civilians were killed as a result?

Author Bio: Leslie Alan Horvitz is an author and journalist specializing in science and a contributor to the Observatory. His nonfiction books include Eureka: Scientific Breakthroughs That Changed the World, Understanding Depression with Dr. Raymond DePaulo of Johns Hopkins University, and The Essential Book of Weather Lore.

Austin, Texas And The Dawn Of The Age Of Physical AI

Forbes

22-03-2025

  • Business
  • Forbes

Austin, Texas And The Dawn Of The Age Of Physical AI

I've always believed that the true power of technology lies in its ability to transform the physical world we inhabit. Since 1996, when I arrived here as a Computer Science student at the University of Texas, I've watched Austin, Texas, evolve from a quirky, music-driven outpost into a global hub of innovation—a city uniquely positioned to lead where the digital meets the physical. Since founding SparkCognition (now Avathon), an AI company born and raised in this vibrant ecosystem, in 2013, I've seen firsthand how Austin's blend of academic prowess, open culture, and breakneck growth is setting the stage for a new era. Today, I'm convinced that Austin can become the crucible for a revolution in intelligent infrastructure—a gift to the world that could redefine prosperity for billions.

I don't believe success lies in Austin chasing Silicon Valley's shadow. Instead, it lies in forging our own path. Imitation is, as Oscar Wilde put it, 'the sincerest form of flattery that mediocrity pays to greatness.' And Austin isn't mediocre. We don't need to be Silicon Anything. We just need to carve our own path and become a place where tangible constructs of technology meet cutting-edge ideas. In Austin today, you'll find Tesla and Samsung factories humming alongside the Army Futures Command, drone startups buzzing in every sector from defense to recreation, and companies like SkyGrid, one I founded in partnership with Boeing, marrying century-old aviation expertise with cutting-edge AI. This convergence isn't accidental. It's the result of a city that's academically electric—thanks to UT Austin's world-class programs—and bold enough to embrace growth without apology.

I came to this realization early. Back in 2000, as a young entrepreneur fresh from UT Austin, I said in an interview with Money Magazine that Austin had no middle ground: it would either rise and grow or shrink and decline. Stagnation wasn't an option, in my mind. Twenty-five years later, Austin's dynamism has proven me right. This city thrives because it welcomes brilliance from everywhere—nearly 75% of U.S. computer science and electrical engineering grad students are foreign-born, and many find a home here. That openness, paired with UT's legacy of breakthroughs, fuels a flywheel of innovation unlike anywhere else. Examples include the COVID spike protein work of UT Austin Professor Jason McLellan and the world's most powerful laser, being developed by Tau Systems, a company with UT Austin links. The flywheel accelerates even further because Austin is now home to some of the world's leading investors, like Jim Breyer, the legendary VC who has served on the boards of Dell, Etsy, Facebook, Marvel, Spotify, Wal-Mart, and 21st Century Fox, and Bill Gurley, a General Partner at Benchmark who ranks as one of the top VC dealmakers of the past fifty years.

My formal journey in AI began at 16, when I started work on what was to be my first published paper on the subject. It's been a lifelong obsession since then—one that led me to drop out of a PhD at UT Austin and start building companies that harness its potential. When I founded SparkCognition more than a decade ago, this new wave of deep-learning-fueled AI was still in its infancy, mostly tinkering with digital toys like Atari game-playing bots. But I saw a bigger canvas: the physical world. Infrastructure—think power grids, oil rigs, manufacturing plants—drives human life more than anything else. If we could make it smarter, safer, and more efficient, the impact would be seismic.
That vision wasn't easy to execute. Physical systems don't behave like digital ones. A wind turbine doesn't churn out millions of failure examples—you might get one or two in a decade. So, at SparkCognition, we invented new tools: multimodal AI that fuses temperature, vibration, and visual data; neural architecture search to automate model-building; algorithms that learn normality to spot anomalies without requiring tons of examples. With nearly 200 patents and a team of PhDs based right here in Austin, we turned theory into action. Our systems optimized cement plants to save energy, monitored warehouses via CCTV to boost safety, and even now help maintain F-16s for the Air Force. This wasn't just prediction, but rather prescription and action, scaling from single machines to entire facilities.

The stakes in this new space of Physical AI, a space we helped pioneer, are massive. There's $100 trillion in global infrastructure that already exists today, with another $100 trillion to be built over the next few decades. These aren't assets you can rip out and replace—they'll be with us for generations. Making them intelligent is the only way forward. Imagine an oil rig that detects safety threats in real time, a factory that predicts downtime at a fraction of the cost, or a wind farm that maximizes output across every turbine. This is the future I'm betting on—a future where AI doesn't just run software but runs the world.

Austin is uniquely equipped to lead this charge. It's not just our tech ecosystem—it's our mindset. We don't shy away from the hard problems. Take the work at SkyGrid, in collaboration with Boeing. This joint venture paired Boeing's aviation legacy with SparkCognition's AI capabilities and is headquartered right here in Austin. I remember the day we launched the idea, in a conversation among then-Boeing Chief Engineer Greg Hyslop, Boeing NeXT leader Steve Nordlund, and myself. Building on efforts like these, consider the aerospace, biotech, robotics, and drone companies sprouting across the city, pushing the boundaries of autonomy. There's Firefly Aerospace building rockets, Paradromics building Brain Computer Interfaces (BCI), Argon Mechatronics and Apptronik building humanoids, and SpecFive building decentralized mesh-networking gear. Dell, of course, is and has long been a pillar of the tech community. Austin is where the physical and digital don't just meet—they can truly merge into something new.

What might the fruits of such a merger be? I call it synthetic physical autonomy, the very thing Physical AI makes possible: self-managing infrastructure that optimizes itself, resolves issues, and runs with minimal human oversight. Picture a power grid that heals itself, a warehouse that tracks every box without sensors, or a fleet of drones that defend themselves without phoning home. Far from sci-fi, this is what Austin's entrepreneurs are building today. If we can perfect these technologies in the heart of Texas, we'll hand the world a blueprint for prosperity. With 10 billion people to support, intelligent infrastructure could deliver more output at lower cost, lifting quality of life everywhere. Physical AI has massive implications. It is Airbus designing lighter aircraft parts with generative techniques and cutting the cost of air travel, Shell improving seismic imaging and finding energy resources more quickly, or Argon robots automating manufacturing through their ability to wield multiple tools precisely.
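As a concrete, non-proprietary illustration of the 'learn normality' idea described above, the short Python sketch below fits a simple statistical profile to healthy sensor readings (temperature and vibration) and flags any reading that strays too far from that profile. Every sensor name, number, and threshold here is invented for illustration, and this is not SparkCognition's or Avathon's actual method; real industrial systems use far richer multimodal models, but the principle of modeling normal behavior rather than collecting failure examples is the same.

    # Minimal sketch: learn a machine's "normal" profile from routine sensor data,
    # then flag readings that deviate strongly from it (no failure examples needed).
    import numpy as np

    rng = np.random.default_rng(42)

    # Columns: temperature (deg C) and vibration (mm/s) from healthy operation.
    normal = np.column_stack([
        rng.normal(60.0, 2.0, 5000),   # healthy temperature readings
        rng.normal(1.5, 0.2, 5000),    # healthy vibration readings
    ])

    mean = normal.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

    def mahalanobis(x: np.ndarray) -> float:
        """Distance of a reading from the learned 'normal' profile."""
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    THRESHOLD = 4.0  # illustrative cutoff; in practice tuned on held-out healthy data

    for reading in [np.array([61.0, 1.6]),    # typical reading
                    np.array([78.0, 3.1])]:   # runs hot and shakes: likely anomaly
        score = mahalanobis(reading)
        print(reading, "score=%.1f" % score, "ANOMALY" if score > THRESHOLD else "ok")

Because the profile is built only from routine operating data, the approach needs no examples of the rare failures it is meant to catch, which is exactly why it suits assets like wind turbines that may fail only once or twice in a decade.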
But leadership in Physical AI won't come from slapping 'powered by AI' on a pitch deck. It will come from a real understanding of the technology and the domain to which it is applied. It will come from facing tough truths: that every assumption about your business—competition, customers, markets—is being upended by intelligent and exponential technologies. I honestly think many in Austin get this. We're not just innovators; our community is full of philosophers and artists and builders, asking what's next and then making it happen.

For me, what's next is clear: a world where infrastructure, the physical objects around us, is imbued with intelligence, propelling us further along the Kardashev scale and toward greater prosperity. If Austin can spark that revolution, we'll cement our place not as a second-rate Silicon something, but as a first-rate force of nature. That's a legacy worth chasing.
