Ukraine said it downed fighter jets with drone boats for the 'first time in history,' destroying two $50 million Russian aircraft
Ukraine said it shot down fighter jets with naval drones for the "first time in history."
Ukraine's military intelligence service said it downed two Russian Su-30s in the Black Sea on Friday.
Ukraine has developed a fleet of naval drones to counter Russia's navy.
Ukraine said it shot down two Russian fighter jets with naval drones, describing it as the "first time in history" the technology had destroyed a crewed combat aircraft.
A Defence Intelligence of Ukraine (GUR) special operations unit said on Saturday that it destroyed a Russian Su-30 fighter jet in the Black Sea on Friday using a missile launched from a naval drone.
Lt. Gen. Kyrylo Budanov, the head of the GUR, later told The War Zone that a second Russian Su-30 was downed by naval drone-launched missiles in the same attack. Su-30 fighter jets are estimated to cost about $50 million per unit.
The GUR shared a video, apparently filmed from below, that shows an aircraft-shaped object breaking apart and falling from the sky.
The GUR said the strike was carried out with a missile launched from a Magura naval drone, a platform the agency had previously said could carry missiles to target Russian aircraft.
Budanov told The War Zone that Ukraine used the Magura-7 version of the naval drone, armed with AIM-9 Sidewinder infrared-guided air-to-air missiles.
The Su-30 is a multirole fighter capable of both air-to-air and air-to-ground attacks. Ukraine has destroyed others as it fights back against Russia's invasion.
The GUR said the jet "was engulfed in flames mid-air before crashing into the sea" after Friday's attack, which was conducted in coordination with the Security Service of Ukraine and the Defence Forces of Ukraine.
It said the strike happened near Russia's Novorossiysk port in western Russia. Russia previously moved many of its vessels there from Sevastopol, the headquarters of its Black Sea Fleet in the Russian-occupied Ukrainian region of Crimea, after Ukrainian attacks damaged so many of the ships based there.
Ukraine has also launched attacks on Novorossiysk.
Ukraine has developed a fleet of naval drones that have menaced Russia's navy.
Along with Ukraine's other weaponry, they have allowed the country to largely neutralize Russia's Black Sea Fleet without having a real navy of its own.
The naval drones have also caused problems for Russia in the skies. Ukraine said in December that it destroyed a Russian helicopter with a naval drone for the first time, saying a Magura was used in that attack too.
A spokesperson for the Russian Ministry of Defence did not immediately respond to a Business Insider request for comment.
Read the original article on Business Insider
Related Articles
IAEA team at Ukraine's Zaporizhzhia says it heard repeated rounds of gunfire
(Reuters) - International monitors at the Russian-held Zaporizhzhia nuclear power plant in Ukraine heard repeated rounds of gunfire that appeared to be aimed at drones reportedly attacking the site's training centre, the U.N.'s nuclear watchdog said on Thursday.

Russian forces seized the Zaporizhzhia plant, Europe's largest nuclear facility with six reactors, in the early weeks of Moscow's 2022 invasion of Ukraine. Each side has since routinely accused the other of attacking the plant and posing a threat to nuclear safety.

Monitors from the International Atomic Energy Agency reported hearing at least five explosions between 11:30 a.m. and 1:45 p.m. local time, each preceded by gunfire, an IAEA statement said. The statement gave no indication of the origin of the drones and said there were no reports of any damage to the centre.

"Drones flying close to nuclear power plants could threaten their safety and security, with potentially serious consequences," IAEA Director General Rafael Grossi said. "As I have stated repeatedly during the war, such incidents must stop immediately."

The statement said it was the fourth time this year that the training centre, located just outside the site perimeter, was reportedly targeted by drones. The plant's Russian management had earlier said Ukrainian drones had landed on the roof of the training centre in "yet another attack" on the facility. It said there had been no casualties or damage.

The Zaporizhzhia station, with all its reactors in shutdown mode, produces no electricity. Before the war, it generated one-fifth of Ukraine's electricity.

Grossi last week told Reuters that while Russia had "never hidden the fact" that it wanted to restart the plant, this could not be done soon as it lacked water for cooling and a stable power supply.
Ukraine Just Demonstrated What AGI War Could Look Like
Engineers test drones at a drone manufacturing facility in Odessa, Ukraine, on June 1, 2025. Ukrainian forces are producing new drones and electronic warfare systems, and the facility is currently testing advanced models, including self-propelled land-based drone systems. Credit: Maksim Voytenko—Anadolu/Getty Images

Bombers in flames on social media. Photos of trick shipping containers packed with drones. Defiant statements from both sides about the scale of the damage.

On June 1, Ukraine targeted several Russian air bases using first-person view (FPV) drones, cheap aerial vehicles that are remotely operated by pilots using camera feeds. According to reports, Ukraine used machine-learning algorithms to guide the drones to the target area. The attack, dubbed 'Spider's Web', demonstrated the current hardware capabilities of modern warfare. And as companies and governments race to develop Artificial General Intelligence (AGI)—advanced artificial intelligence systems that can quickly understand, learn, and apply knowledge—the operation also provides a glimpse into what the future of warfare could look like.

The Security Service of Ukraine's (SBU) operation knocked out targets up to 8,000 kilometers (nearly 5,000 miles) from the frontlines. As the dust settles, analysts are starting to wonder whether anywhere is truly beyond the reach of FPV drones. Some reports suggest dozens of strategic bombers (some said to be capable of delivering nuclear weapons) were destroyed or disabled by 117 FPV drones, though Moscow countered that only a handful of planes were struck. Western assessments put the figure at no lower than 10.

But the scale of the attack, while impressive, isn't its most remarkable aspect. After all, the operation follows a Russian drone attack at the end of May involving almost 500 unmanned aerial vehicles. Ukraine's attack may have been smaller, but it more than made up for it in logistical brilliance. First, the parts were smuggled into the country and the drones were assembled. Vasyl Maliuk, the head of the SBU, told the BBC that they were then loaded onto lorries with secret compartments and driven by unsuspecting Russian couriers to locations near air bases. When the shipments reached their destination, roofs on the lorries retracted to reveal the hidden hardware. And the drones took off.

Spider's Web depended on three distinct but related capabilities: logistics to deliver the drones, deception to keep them hidden, and coordination to pilot dozens of them concurrently. Yes, the attack confirms that expendable drones are the weapon of the 21st century. But Ukraine's strike serves as a visceral example of how AGI will work as a warfighting tool—and how humans will work alongside AGI.

Make no mistake, AGI-fueled warcraft is coming. Over the past two years, the AI industry has increasingly invested in military applications of AI and gravitated towards 'security' as one of its organizing principles. Frontier labs are embedding themselves into the national security state. For instance, in June 2024 OpenAI appointed retired U.S. Army general Paul Nakasone to its board of directors. In December 2024, the AI giant announced it had partnered with defense technology outfit Anduril to develop drone defense systems. And Google, my former employer, scoped out 'national security imperatives for the AI era' earlier this year.

The technology sector's allusions to national security and AI have a certain shape-shifting quality to them. It's not always clear whether someone is referring to defensive or offensive AI capabilities, or whether it is even possible to neatly separate the former from the latter. In the context of armed conflict, things get even muddier. The idea that a sufficiently capable AGI system might eventually pilot drones is already on the minds of military planners, but Ukraine's strike on Russia gives us a much more specific picture of what to expect.

Spider's Web had been in the making for eighteen months. During this time, 150 small attack drones and 300 explosive devices were smuggled into Russia to stage the attack. Rather than one large shipment, the SBU likely engaged in piecemeal smuggling to avoid detection, possibly bringing components across borders, using front companies, or bribing officials to pass through checkpoints. The fog of war is thick. We may never know for certain, but we do know that the final drones were packed into special mobile containers that looked inconspicuous from the outside.

According to reports, the drivers of the lorries all told a similar story. A businessman approached them to pick up what seemed to be wooden cabins and deliver them to various locations around Russia. They agreed and thought little of it. Once the trucks were in position, the strike was launched. At the predetermined moment, each container's roof panels were remotely opened to release a swarm of drones (likely piloted remotely by piggybacking on Russian telecommunications networks).

Spider's Web offers a window into how AGI could supercharge similar attacks in the future. AGI could analyse transportation routes to find the safest, fastest, and least conspicuous way to move cargo. It could plan truck routes that avoid busy checkpoints, choose transit times when border guards are understaffed, and even account for satellite overpasses or drone surveillance. Such a system could coordinate multimodal logistics (think planes, trains, and automobiles) with timing that no human team could match, and crunch traffic patterns, rail schedules, and weather data to find the perfect moment for an attack.

This hypothetical warfighting AGI could automatically generate corporate entities complete with registration documents, tax records, and websites to serve as cover. It could forge driver's licenses, passports, and employee IDs that pass automated verification—much faster than humans today could. Aside from paperwork, an AGI could manage a whole suite of deception technologies. For example, it could emit fake GPS signals to confuse satellite tracking or hack into a facility's CCTV feed to loop old footage while operatives move equipment.

When it's time to strike, AGI could guide each drone to its target as part of a single unified swarm, optimised to prevent collisions and spaced to maximize coverage. AGI may even make it possible to monitor the electronic warfare environment and switch frequencies if it senses jamming on the current channel. If an air defense system starts tracking the swarm, the AGI might command all drones to disperse or drop to terrain-hugging altitude to increase their odds of survival. As soon as the destination is in range, AGI could help drones autonomously recognise target types and aim for the most damaging impact points (say, by guiding a drone to the exact location of an aircraft's fuel tank).

To be sure, these are still predictions about what AGI may be capable of in the future. And there will likely be limitations. Precision hand-work like soldering detonators, balancing rotors, and packing warheads remains hard to automate at scale without a bespoke factory line. Robots can do it, but you still need humans to do the initial set-up. Plus, explosives sweat, lithium-ion packs puff, and cheap FPV airframes warp if left in non-climate-controlled depots. Periodic maintenance like changing desiccant packs or swapping bloated cells would likely remain vital. A swarm of AGI-powered drones would probably still need caretakers who can move around without drawing attention. Finally, jamming-resistant links need spectrum licences, custom SIM provisioning, or pirate base stations smuggled in-country. Deploying that communications infrastructure (like antennae or repeaters) requires boots on the ground.

But even with a heavy dose of scepticism, I find it hard to see the Ukrainian strike as anything other than a postcard from the future. Problems might look insurmountable to us, but you should never bet against the machine conjuring up an unorthodox solution. I fear that the best-case scenario ahead of us is one where attacks such as these can simply be delivered slightly faster. The worst-case scenario is one in which a Spider's Web-style operation can be conducted orders of magnitude faster by just a handful of people.

Thinking about the implications of AGI is useful in that it reminds us that power flows to whoever can orchestrate complexity faster than the adversary can comprehend it. Complexity is the strategic currency of war in the information age, and AGI is a complexity accelerator. If AGI finds its way into the wrong hands, it could become much easier to pull off a deadly attack. That is as true for the great powers as it is for rogue actors. This is the new strategic reality, and every military has to plan for it.

What Ukraine's Spider's Web strike taught us is that the hardware for an AGI warfighter is ready. All that remains is the software.
Big AI isn't just lobbying Washington—it's joining it
Welcome to Eye on AI! In this edition…OpenAI releases report outlining efforts to block malicious use of its tools…Amazon continues its AI data center push in the South, with plans to spend $10 billion in North Carolina…Reddit sues Anthropic, accusing it of stealing data.

After spending a few days in Washington, D.C., this week, it's clear that 'Big AI'—my shorthand for companies including Google, OpenAI, Meta, Anthropic, and xAI that are building and deploying the most powerful AI models—isn't just present in the nation's capital. It's being welcomed with open arms.

Government agencies are eager to deploy their models, integrate their tools, and form public-private partnerships that will ultimately shape policy, national security, and global strategy inside the Beltway. And frontier AI companies, which also serve millions of consumer and business customers, are ready and willing to do business with the U.S. government. For example, just today Anthropic announced a new set of AI models tailored for U.S. national security customers, while Meta recently revealed that it's making its Llama models available to defense partners.

This week, former Google CEO Eric Schmidt was a big part of bringing Silicon Valley and Washington together. I attended an AI Expo that served up his worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy (which will be chock-full of drones and robots if he gets his way). I also dressed up for a gala event hosted by the Washington AI Network, with sponsors including OpenAI, Meta, Microsoft, and Amazon, as well as a keynote speech from U.S. Commerce Secretary Howard Lutnick.

Both events felt like a parallel AI universe to this D.C. outsider: In this universe, discussions about AI are less about increasing productivity or displacing jobs, and more about technological supremacy and national survival. Winning the AI 'race' against China is front and center. Public-private partnerships are not just desirable—they're essential to help the U.S. maintain an edge in AI, cyber, and intelligence systems.

I heard no references to Elon Musk and DOGE's 'move fast and break things' mode of implementing AI tools at the IRS or the Veterans Administration. There were no discussions about AI models and copyright concerns. No one was hand-wringing about Anthropic's new model blackmailing its way out of being shut down.

Instead, at the AI Expo, senior leaders from the U.S. military talked about how the recent Ukrainian drone attacks on Russian air bases are prime examples of how rapidly AI is changing the battlefield. Federal procurement experts discussed how to accelerate the Pentagon's notoriously slow acquisition process to keep pace with commercial AI advances. OpenAI touted its o3 reasoning model, now deployed on a secure government supercomputer at Los Alamos National Laboratory.

At the gala, Lutnick made the stakes explicit: 'We must win the AI race, the quantum race—these are not things that are open for discussion.' To that end, he added, the Trump administration is focused on building another terawatt of power to support the massive AI data centers sprouting up across the country. 'We are very, very, very bullish on AI,' he said. The audience—packed with D.C.-based policymakers and lobbyists from Big AI—applauded.

Washington may not be a tech town, but if this week was any indication, Silicon Valley and the nation's capital are learning to speak the same language.

Still, the growing convergence of Silicon Valley and Washington makes many observers uneasy—especially given that it's been just seven years since thousands of Google employees protested the company's involvement in a Pentagon AI project, ultimately forcing it to back out. At the time, Google even pledged not to use its AI for weapons or surveillance systems that violated 'internationally accepted norms.'

On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of 'pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.' The organization says the public needs 'to reckon with the ways in which today's AI isn't just being used by us, it's being used on us.'

But the parallel AI universe I witnessed—where Big AI and the D.C. establishment are fusing interests—is already realigning power and policy. The biggest question now is whether they're doing so safely, transparently, and in the public interest—or simply in their own. The race is on.

With that, here's the rest of the AI news.

Sharon