After Reaching AGI Some Insist There Won't Be Anything Left For Humans To Teach AI About

Forbes · May 15, 2025

AGI is going to need to keep up with expanding human knowledge even in a post-AGI world.
In today's column, I address a prevalent assertion that after AI is advanced to the point of becoming artificial general intelligence (AGI), there won't be anything left for humans to teach AGI about. The assumption is that AGI will know everything that we know. Ergo, there isn't any ongoing need, or even value, in trying to train AGI on anything else.
Turns out that's hogwash (misguided), and there will still be a lot of human-AI, or shall we say human-AGI, co-teaching going on.
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI; it might be achievable in decades, or perhaps centuries, from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
A common viewpoint is that if we do attain AGI, the AGI will know everything that humans know. All human knowledge will be at the computational fingertips of AGI. In that case, the seemingly logical conclusion is that AGI won't have anything else to learn from humans. The whole kit and caboodle will already be in place.
For example, if you find yourself idly interested in Einstein's theory of relativity, no worries, just ask AGI. The AGI will tell you all about Einstein's famed insights. You won't need to look up the theory anywhere else. AGI will be your one-stop shopping bonanza for all human knowledge.
Suppose you decided that you wanted to teach AGI about how important Einstein was as a physicist. AGI would immediately tell you that you needn't bother. The AGI already knows the crucial role that Einstein played in human existence.
Give up trying to teach AGI about anything at all since AGI has got it all covered. Period, end of story.
There are several false or misleading assumptions underlying the strident belief that we won't be able to teach AGI anything new.
First, keep in mind that AGI will be principally trained on written records such as the massive amount of writing found across the Internet, including essays, stories, poems, etc. Ask yourself whether the written content on the Internet is indeed a complete capture of all human knowledge.
It isn't.
There are written records that aren't on the Internet and just haven't been digitized, or, if digitized, haven't been posted onto the Internet. The crux is that there will still be a lot of content that AGI won't have seen. In a post-AGI world, it is plausible to assume that humans will still be posting more content onto the Internet and that, on an ongoing basis, the AGI can demonstrably learn by scanning that added content.
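To make that ongoing scanning concrete, here is a minimal sketch of what a continual-learning loop could look like. Everything in it is hypothetical: fetch_new_documents and update_model are invented placeholder names standing in for a crawler and a training (or indexing) step, not any real AGI API.

```python
import time

def fetch_new_documents(since: float) -> list[str]:
    """Placeholder for a crawler that returns content posted after `since`."""
    return []  # a real implementation would scan newly posted Internet content

def update_model(documents: list[str]) -> None:
    """Placeholder for folding new material into the AGI, e.g., an
    incremental fine-tuning step or an insert into a retrieval index."""
    for _doc in documents:
        pass

def continual_learning_loop(poll_seconds: int = 3600) -> None:
    """Periodically scan for newly posted human content and learn from it."""
    last_scan = time.time()
    while True:
        new_docs = fetch_new_documents(since=last_scan)
        last_scan = time.time()
        if new_docs:
            update_model(new_docs)
        time.sleep(poll_seconds)
```

The design point is simply that learning becomes a recurring process rather than a one-time training event.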
Second, AGI won't know what's in our heads.
I mean to say that there is knowledge we have in our noggins that isn't necessarily written down and placed onto the Internet. AGI will not be privy to any of that brainware content. As an aside, many research efforts are advancing brain-machine interfaces (BMI), see my coverage at the link here, which may someday allow for the reading of minds, but we don't know when that will materialize, nor whether it will coincide with attaining AGI.
Another consideration is that time continues to flow along in a post-AGI era.
This suggests that the world will be changing and that humans will come up with new thoughts that we hadn't conceived of previously. AGI, if frozen or out of touch with the latest human knowledge, will have only captured human knowledge that existed at a particular earlier point in time. The odds are that we would want AGI to keep up with whatever new knowledge we've divined since that initial AGI launch.
Imagine things this way. Suppose that we managed to attain AGI before Einstein was even born. I know that seems zany, but just go with the idea for the moment. If AGI were locked into only knowing human knowledge from before Einstein, this amazing AGI would regrettably miss out on the theory of relativity.
Since it is far-fetched to try to turn back the clock and postulate that AGI would be attained before Einstein, let's recast the idea. There is undoubtedly another Einstein-like person yet to be born; thus, at some point in the future, once AGI is around, it stands to reason that AGI would benefit from learning newly conceived knowledge.
By and large, we can reject the premise that AGI will have learned all human knowledge, in the sense that this brazen claim refers solely to the human knowledge known at the time of AGI attainment, and which was readily available to the AGI at that point in time. That leaves a whole lot of additional teaching on the table. Plus, the passage of time will further expand the new knowledge that humans could share with AGI.
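One way to picture this "knowledge at the time of attainment" boundary is as a training cutoff, with anything newer reachable only if humans explicitly supply it afterward. The sketch below is purely illustrative; the cutoff date and both knowledge stores are invented for the example.

```python
from datetime import date

TRAINING_CUTOFF = date(2040, 1, 1)  # invented AGI attainment date

# Knowledge baked in at training time versus knowledge taught afterward.
trained_knowledge = {"theory of relativity": "captured during training"}
post_cutoff_feed: dict[str, str] = {}  # filled by ongoing human teaching

def lookup(topic: str, discovered_on: date) -> str:
    """Resolve a topic against baked-in knowledge or the post-cutoff feed."""
    if discovered_on < TRAINING_CUTOFF:
        return trained_knowledge.get(topic, "unknown, despite predating the cutoff")
    return post_cutoff_feed.get(topic, "unknown: discovered after the cutoff and never taught")

# A future Einstein-like discovery stays invisible unless humans supply it:
print(lookup("new physics breakthrough", date(2043, 6, 1)))
post_cutoff_feed["new physics breakthrough"] = "taught to the AGI post-attainment"
print(lookup("new physics breakthrough", date(2043, 6, 1)))
```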
Will AGI want to be taught by humans or at least learn from whatever additional knowledge that humans possess?
One answer is no. You see, some worry that AGI will find it insulting to learn from humans and therefore will avoid doing so. The logic seems to be that since AGI will be as smart as humans, the AGI might get uppity, decide we are inferior, and conclude that we couldn't possibly have anything useful for it to gain from us.
I am more upbeat on this posture.
I would like to think that an AGI that is as smart as humans would crave new knowledge. AGI would be eager to acquire new knowledge and do so with rapt determination. Whether the knowledge comes from humans or beetles, the AGI wouldn't especially care. Garnering new knowledge would be a key precept of AGI, which I contend is a much more logical assumption than the conjecture that AGI would turn up its nose at gleaning new human-devised knowledge.
Would humans be willing to learn from AGI?
Gosh, I certainly hope so. It would seem a crazy notion for humankind to decide that we won't opt to learn things from AGI. AGI would be a huge boon to human learning. You could make a compelling case that the advent of AGI could immensely increase human knowledge, assuming that people can tap into AGI easily and at a low cost. Envision that everyone with Internet access could seek out AGI to train them or teach them on whatever topic they desired.
Boom, drop the mic.
In a post-AGI realm, the best course of action would be that AGI learns from us on an ongoing basis and, on a likewise ongoing basis, we also learn from AGI. That's a synergy worthy of great hope and promise.
The last word on this for now goes to the legendary Henry Ford: 'Coming together is a beginning; keeping together is progress; working together is success.' If humanity plays its cards right, we will have human-AGI harmony and lean heartily into the synergy that arises accordingly.


Related Articles

Double-Whammy When AGI Embeds With Humanoid Robots And Occupies Both White-Collar And Blue-Collar Jobs

Forbes · 6 hours ago

AGI will be embedded into humanoid robots, which makes white-collar and blue-collar jobs a target for walking/talking automation.

In today's column, I examine the highly worrisome qualms expressed that the advent of artificial general intelligence (AGI) is likely to usurp white-collar jobs. The stated concern is that since AGI will be on par with human intellect, any job that relies principally on intellectual pursuits, such as typical white-collar work, will be taken over via the use of AGI. Employers will realize that rather than dealing with human white-collar workers, they can more readily get the job done via AGI.

This, in turn, has led to a rising call that people should aim toward blue-collar jobs, because (presumably) those forms of employment will not be undercut via AGI. Sorry to say, that misses the bigger picture, namely that AGI, when combined with humanoid robots, is coming not only for white-collar jobs but for blue-collar jobs too. It is a proverbial double-whammy when it comes to the attainment of AGI.

Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI; it might be achievable in decades, or perhaps centuries, from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Before launching into the primary matter at hand, let's contemplate a famous quote attributed to Charles Kettering, a legendary inventor, who said, 'A problem well-stated is a problem half-solved.' I bring this up because the loud clamors right now about AGI replacing white-collar workers are only seeing half of the problem. The problem, as they see it, is that since AGI is intellectually on par with humans, and since white-collar workers mainly use intellect in their work endeavors, AGI is going to be used in place of humans for white-collar work. I will in a moment explain why that's only half of the problem and why there is a demonstrative need to more carefully and fully articulate the nature of the problem.

On a related facet, the belief that AGI will axiomatically replace white-collar labor makes a number of other key assumptions. I shall briefly explore those and then come back to why the problem itself is only half-baked. Presumably, the cost of using AGI for white-collar work will need to make AGI a better ROI choice than human workers.
If not, then an employer would be wiser to stick with humans rather than employing AGI. There often seems to be an unstated belief that AGI is necessarily going to be a less costly route than employing humans. We don't know yet what the cost of using AGI will be. It could be highly expensive. Indeed, some are worried that the world will divide into the AGI haves and AGI have-nots, partially due to the exorbitant cost that AGI might involve.

If AGI is free to use, well, that would seem to be the nail in the coffin for using human workers in the same capacity. Another angle is that AGI is relatively inexpensive in comparison to human labor. In that case, the use of AGI is likely to win over human labor. But if the cost of AGI is nearer to the cost of human labor (all in), or more so, then employers would rationally need to weigh the use of one versus the other.

Note that when referring to the cost of human labor, there is more to that calculation than simply the dollar-per-hour labor rate per se. There are lots of other less apparent costs, such as the cost to manage human labor, the cost of dealing with HR-related issues, and many other factors. Thus, an AGI versus human labor ROI will be more complex than it might seem at first glance (a toy version of this comparison appears in the code sketch a few paragraphs below). In addition, keep in mind that AGI would seemingly be readily switched on and off, and have other capacities that human labor would not equally tend to allow.

Assume that, by and large, the advent of AGI will decimate the need for white-collar human labor. The refrain right now is that people should begin tilting toward blue-collar jobs as an alternative to white-collar jobs. This is a logical form of thinking in the sense that AGI, as an intellectual mechanism, would be unable to compete in jobs that involve hands-on work. A plumber needs to come to your house and do hands-on work to fix your plumbing. This is a physicality that entails arriving at your physical home, physically bringing and using tools, and physically repairing your faulty home plumbing. A truck driver likewise needs to sit in the cab of a truck and drive the vehicle. These are physically based tasks. There is no getting around the fact that these are hands-on activities.

Aha, yes, those are physical tasks, but that doesn't necessarily mean that only human hands can perform them. The gradual emergence of humanoid robots will provide an alternative to human hands. A humanoid robot is a type of robot that is built to resemble a human in form and function. You've undoubtedly seen those types of robots in the many online video recordings showing them walking, jumping, grasping at objects, and so on. A tremendous amount of active research and development is taking place to devise humanoid robots.

They look comical right now. You watch those videos and laugh when the robot trips over a mere stick lying on the ground, something a human would seldom trip over. You scoff when a robot tries to grasp a coffee cup and inadvertently spills most of the coffee. It all seems humorous and a silly pursuit. Keep in mind that we are all observing the development process while it is still taking place.

At some point, the guffaws at humanoid robots will lessen. Humanoid robots will be as smooth and graceful as humans. This will continue to be honed. Eventually, humanoid robots will be less prone to the physical errors that humans make. In a sense, the physicality of a humanoid robot will be on par with humans, if not better, due to its mechanical properties.
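Here is the toy cost comparison promised above. Every number is invented purely for illustration, as is the subscription-plus-usage pricing shape for AGI; the only point is that the all-in human figure includes more than the wage, and the AGI figure is unknown today.

```python
def all_in_human_cost(wage_per_hour: float, hours_per_year: float,
                      overhead_rate: float) -> float:
    """Direct pay plus the less-apparent costs (management, HR, benefits),
    modeled crudely as a multiplier on wages."""
    return wage_per_hour * hours_per_year * (1 + overhead_rate)

def all_in_agi_cost(subscription_per_year: float, hours_used: float,
                    rate_per_hour: float) -> float:
    """One plausible pricing shape: a flat subscription plus metered usage."""
    return subscription_per_year + hours_used * rate_per_hour

# Invented illustrative figures, not data from the article:
human = all_in_human_cost(wage_per_hour=40.0, hours_per_year=2000, overhead_rate=0.5)
agi = all_in_agi_cost(subscription_per_year=20_000, hours_used=2000, rate_per_hour=25.0)

print(f"Human, all-in: ${human:,.0f}")  # $120,000
print(f"AGI, all-in:   ${agi:,.0f}")    # $70,000
print("Employer leans AGI" if agi < human else "Employer leans human")
```

Shift any of the inputs, say a steep AGI hourly rate, and the comparison flips, which is exactly why the ROI question resists a blanket answer.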
Do not discount the coming era of quite physically capable humanoid robots.

You might remember that in The Wonderful Wizard of Oz, the Scarecrow lacked a brain. Without seeming to anthropomorphize humanoid robots, the current situation is that those robots typically use a form of AI that is below the sophistication level of modern generative AI. That's fine for now due to the need to first ensure that the physical movements of the robots get refined.

I have discussed that a said-to-be realm of Physical AI is going to be a huge breakthrough with incredible ramifications, see my analysis at the link here. The idea underlying Physical AI is that the AI of today is being uplifted by doing data training on the physical world. This also tends to include the use of World Models, consisting of broad constructions about how the physical world works, such as that we are bound to operate under conditions of gravity and other physical laws of nature, see the link here.

The bottom line here is that there will be a close pairing of robust AI with humanoid robots. Imagine what a humanoid robot can accomplish if it is paired with AGI. I'll break the suspense and point out that AGI paired with humanoid robots means that those robots readily enter the blue-collar worker realm. Suppose your plumbing needs fixing. No worries, a humanoid robot that encompasses AGI will be sent to your home. The AGI is astute enough to carry on conversations with you, and the AGI also fully operates the robot to undertake the plumbing tasks.

How did the AGI-paired humanoid robot get to your home? Easy-peasy, it drove a car or truck to get there. I've previously predicted that all the work on devising autonomous vehicles and self-driving cars will get shaken up once we have suitable humanoid robots. There won't be a need for a vehicle to contain self-driving capabilities. A humanoid robot will simply sit in the driver's seat and drive the vehicle. This is a much more open-ended solution than having to craft components that go into and onto a vehicle to enable self-driving. See my coverage at the link here.

Timing Is Notable

One of the reasons that many do not give much thought to the pairing of AGI with humanoid robots is that today's humanoid robots seem extraordinarily rudimentary and incapable of performing physical dexterity tasks on par with human capabilities. Meanwhile, there is brazen talk that AGI is just around the corner. AGI is said to be within our grasp. Let's give the timing considerations a bit of scrutiny.

There are three primary timing angles: AGI arrives before physically adept humanoid robots, physically adept humanoid robots arrive before AGI, or the two arrive at roughly the same time. A skeptic would insist that there is a fourth possibility, namely that we never achieve AGI and/or we fail to achieve sufficiently physically capable humanoid robots. I am going to reject that possibility. Perhaps I am overly optimistic, but it seems to me that we will eventually attain AGI, and we will eventually attain physically capable humanoid robots. I shall next consider each of the three genuinely reasonable possibilities in turn.

What if we manage to attain AGI before we manage to achieve physically fluent humanoid robots? That's just fine. We would indubitably put AGI to work as a partner with humans in figuring out how to push along the budding humanoid robot development process. It seems nearly obvious that with AGI's capable assistance, we would overcome any bottlenecks and soon enough arrive at top-notch physically adept humanoid robots.
At that juncture, we would then toss AGI into the humanoid robots and have ourselves quite an amazing combination.

Suppose that we devise very physically adept humanoid robots but have not yet arrived at AGI. Are we in a pickle? Nope. We could use conventional advanced AI inside those humanoid robots. The combination would certainly be good enough for a wide variety of tasks. The odds are that we would need to be cautious about where such robots are utilized. Nonetheless, we would have essentially walking, talking, and productive humanoid robots. If AGI never happens, oh well, we end up with pretty good humanoid robots. On the other hand, once we arrive at AGI, those humanoid robots will be stellar. It's just a matter of time.

Let's consider the potential of AGI and humanoid robots perchance being attained around the same time. Assume that this timing isn't due to an outright cross-mixing with each other; they just so happen to advance on a similar timeline. I tend to believe that's the most likely of the three scenarios. Here's why.

First, despite all the hubris about AGI being within earshot, perhaps in the next year or two, which is a popular pronouncement by many AI luminaries, I tend to side with recent surveys of AI developers that put the date around the year 2040 (see my coverage at the link here). Some AI luminaries sneakily play with the definition of AGI in hopes of making their predictions come true sooner, akin to moving the goalposts to easily score points. For my coverage on Sam Altman's efforts at moving the cheese regarding AGI attainment, see the link here.

Second, if you are willing to entertain the year 2040 as a potential date for achieving AGI, that's about 15 years from now. In my estimation, the advancements being made in humanoid robots will readily progress such that by 2040 they will be very physically adept. It will probably be sooner, but let's go with the year 2040 for ease of contemplation. In my view, we will likely have humanoid robots doing well enough that they will be put into use prior to arriving at AGI. The pinnacle of robust humanoid robots and the attainment of AGI will roughly coincide with each other. Two peas in a pod.

In an upcoming column posting, I will examine the enormous consequences of having AGI paired with fully physically capable humanoid robots. As noted above, this will have a humongous impact on white-collar work and blue-collar work. There will be gargantuan economic impacts, societal impacts, cultural impacts, and so on.

Some final thoughts for now. A single whammy is already being hotly debated. The debates currently tend to be preoccupied with the loss of white-collar jobs due to the attainment of AGI. A saving grace seems to be that at least blue-collar jobs are going to be around and thriving, even once AGI is attained. The world doesn't seem overly gloomy if you can cling to the upbeat posture that blue-collar tasks remain intact.

The double whammy is a lot more to take in. But the double whammy is the truth. The truth needs to be faced. If you are having doubts as a human about the future, just remember the famous words of Vince Lombardi: 'Winners never quit, and quitters never win.' Humankind can handle the double whammy. Stay tuned for my upcoming coverage of what this entails.

Famed Short Seller Jim Chanos Is Betting Against Used Car Retailer Carvana And AI Losers Like IBM

Forbes · 16 hours ago

Jim Chanos (left) speaks with Forbes editor Matt Schifrin on stage at the 2025 Forbes Iconoclast Summit.

When legendary investor Jim Chanos entered the stage at the 2025 Forbes Iconoclast Summit Thursday, he had a big smile on his face thanks to the very public feud that had erupted between President Donald Trump and the world's richest person, Tesla CEO Elon Musk. Chanos, who first shorted Tesla stock back in 2016, said of the spat: 'Most predictable breakup ever.'

With a long and successful track record of betting against overpriced or fraudulent companies, Chanos' latest big short is used-car retailer Carvana. Just as intriguing, he's also looking to short companies that will lose out in the race for artificial intelligence.

'Carvana is a misunderstood story: The Street believes it is an epic turnaround, but in fact, the company is still losing money,' said Chanos. 'Although it is priced as a growth stock, the business is cyclical.' He pointed, for instance, to how, in mid-2023, its core business dropped by more than 30% year-over-year when there was a slight slowdown in the economy.

Chanos noted that the Street is focusing on the company's gross profit margins. But, he argued, those margins are a product of aggressive accounting that inflates both unit economics and corporate profitability while excluding many components that other auto dealers typically include. 'Carvana is making all this money in finance, not selling cars,' explained Chanos. 'They are a subprime lender.' Carvana has booked big gains on selling subprime loans to affiliates and non-affiliates, which represented 122% of its income in the latest quarter (a bit of arithmetic on that figure appears in the sketch below).

Carvana's stock has a history of wild swings. The company originally went public in 2017 at $15 per share, raising $225 million. A pandemic-era darling, the stock hit an all-time high of around $370 per share in early 2021 during the auto supply-chain crisis, but tanked shortly after, and by 2022 it had fallen to below $5 per share amid reports that the company was on the brink of bankruptcy following its $2.2 billion acquisition of auction business ADESA. The following year, with its stock still languishing in the single digits, the company announced a debt and equity raise in a move that S&P Global called 'tantamount to default.'

By mid-2024, the stock had rallied back above $100 per share and continued to rise, hitting roughly $250 by the end of the year. So far this year, Carvana is up 70% to $343 per share, rallying massively since a low point of $162 per share in early April, when the market tanked following President Trump's tariff announcements. The bull case behind the most recent rally: auto tariffs will hurt new car sales, giving Carvana more market share. Now, in June 2025, the company's enterprise value has hit $73 billion, near an all-time high.

One good indicator of a looming correction, according to Chanos, is that the short interest on Carvana stock is back down to multi-year lows of below 10%, a marked turnaround from being one of the most heavily shorted stocks in the market during 2023. 'Perhaps more ominous is that insiders have begun to sell an absolute torrent of stock—pretty much the whole C-suite is getting out seemingly as fast as they can,' he said. 'This is really a company at the end of the whip for retail consumers.' Some $1.7 billion worth of stock was sold by company executives in May, filings show.
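The arithmetic sketch referenced above: if gains on loan sales amount to 122% of total reported income, the remainder of the business must net out negative. The figures below are illustrative units, not Carvana's actual financials.

```python
reported_income = 100.0                   # total reported income, arbitrary units
loan_sale_gains = 1.22 * reported_income  # 122% of income, per Chanos' remark

# Everything other than selling loans must therefore be losing money:
rest_of_business = reported_income - loan_sale_gains
print(f"Non-lending operations: {rest_of_business:+.1f} units")  # -22.0
```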
Chanos famously predicted energy company Enron's collapse in 2001, shorting the company months before a large accounting scandal destroyed investor confidence and led to bankruptcy. He also gained fame with profitable bets against payments company Wirecard, which in 2020 admitted that billions of euros on its books 'did not exist,' resulting in insolvency.

Not all of his bets have had such success, however. Chanos famously took a short position against electric vehicle maker Tesla in 2016, but reduced the position in 2020 before the company was added to the S&P 500 index. In the following years, his other short positions included Live Nation, International Business Machines, Block, and General Electric, among others. None matched his Enron and Wirecard successes.

In late 2023, Chanos closed down the hedge funds he managed that shorted overpriced or fraudulent companies, citing dwindling assets and a long/short equity business model that was increasingly coming under pressure. Converting it into a family office, he now no longer manages capital but continues to advise clients at Chanos & Co., the firm he founded in 1985 as Kynikos Associates.

Chanos also told the Iconoclast audience that he is shorting several companies that have benefited from the hype around AI but will lose out in the future, such as 'IT body shops' like IBM. He likened this investing theme to the dot-com bubble, which burst in 2000. Back then, he successfully shorted companies he believed wouldn't survive the technological transition, such as Blockbuster Video (the pre-Netflix movie rental chain) and Kodak (the film and camera pioneer that went bankrupt after the transition to digital photography).

'We were short almost all those types of companies back then,' Chanos said. 'We're doing our work now, but we think there are a number of companies that similarly pop up and are going to be roadkill on the AI highway.' IT consultants and body shops will see their businesses melt away, he predicted. 'We're puzzled by some of them because they are getting valuations similar to AI-type companies when even a cursory look at their business model shows they might be in a lot of trouble,' he added.

One of Chanos' older names within that group is IBM, which he has been short on and off since 2020. 'The company is not growing but trading at all-time high valuations,' he said, noting that the 'antiquated software' it maintains for its clients will likely become obsolete.

Ukraine Just Demonstrated What AGI War Could Look Like

Yahoo · 17 hours ago

Engineers test drones at a drone manufacturing facility in Odessa, Ukraine, on June 1, 2025. The Ukrainian forces are producing new drones and electronic warfare systems. The facility is currently testing advanced models, including self-propelled land-based drone systems. Credit - Maksim Voytenko—Anadolu/Getty Images

Bombers in flames on social media. Photos of trick shipping containers packed with drones. Defiant statements from both sides about the scale of the damage.

On June 1, Ukraine targeted several Russian air bases using first-person view (FPV) drones, cheap aerial vehicles that are remotely operated by pilots using camera feeds. According to reports, Ukraine used machine-learning algorithms to guide the drones to the target area. The attack, dubbed 'Spider's Web', demonstrated the current hardware capabilities of modern warfare. And as companies and governments race to develop Artificial General Intelligence (AGI)—advanced artificial intelligence systems that can quickly understand, learn, and apply knowledge—the operation also provides a glimpse into what the future of warfare could look like.

The Security Service of Ukraine's (SBU) operation knocked out targets up to 8,000 kilometers (nearly 5,000 miles) from the frontlines. As the dust settles, analysts are starting to wonder whether anywhere is truly beyond the reach of FPV drones. Some reports suggest dozens of strategic bombers (some said to be capable of delivering nuclear weapons) were destroyed or disabled by 117 FPV drones, though Moscow countered that only a handful of planes were struck. Western assessments put the figure at no lower than 10.

But the scale of the attack, while impressive, isn't its most remarkable aspect. After all, the operation follows a Russian drone attack at the end of May involving almost 500 unmanned aerial vehicles. Ukraine's attack may have been smaller, but it more than made up for it in logistical brilliance.

First, the parts were smuggled into the country and the drones were assembled. Vasyl Maliuk, the head of the SBU, told the BBC that they were then loaded onto lorries with secret compartments and driven by unsuspecting Russian couriers to locations near air bases. When the shipments reached their destination, roofs on the lorries retracted to reveal the hidden hardware. And the drones took off.

Spider's Web depended on three distinct but related capabilities: logistics to deliver the drones, deception to keep them hidden, and coordination to pilot dozens of them concurrently. Yes, the attack confirms that expendable drones are the weapon of the 21st century. But Ukraine's strike serves as a visceral example of how AGI will work as a warfighting tool—and how humans will work alongside AGI.

Make no mistake, AGI-fueled warcraft is coming. Over the past two years, the AI industry has increasingly invested in military applications of AI and gravitated towards 'security' as one of its organizing principles. Frontier labs are embedding themselves into the national security state. For instance, in June 2024 OpenAI appointed retired U.S. Army general Paul Nakasone to its Board of Directors. In December 2024, the AI giant announced it had partnered with defense technology outfit Anduril to develop drone defense systems. And Google, my former employer, scoped out 'national security imperatives for the AI era' earlier this year. The technology sector's allusions to national security and AI have a certain shape-shifting quality to them.
It's not always clear whether someone is referring to defensive or offensive AI capabilities, or whether it is even possible to neatly separate the former from the latter. In the context of armed conflict, things get even muddier. The idea that a sufficiently capable AGI system might eventually pilot drones is already on the minds of military planners, but Ukraine's strike on Russia gives us a much more specific picture of what to expect.

Spider's Web had been in the making for eighteen months. During this time, 150 small attack drones and 300 explosive devices were smuggled into Russia to stage the attack. Rather than one large shipment, the SBU likely engaged in piecemeal smuggling to avoid detection, possibly bringing components across borders, using front companies, or bribing officials to pass through checkpoints. The fog of war is thick. We may never know for certain, but we do know that the final drones were packed into special mobile containers that looked inconspicuous from the outside.

According to reports, the drivers of the lorries all told a similar story. A businessman approached them to pick up what seemed to be wooden cabins and deliver them to various locations around Russia. They agreed and thought little of it. Once the trucks were in position, the strike was launched. At the predetermined moment, each container's roof panels were remotely opened to release a swarm of drones (likely piloted remotely by piggybacking on Russian telecommunications networks).

Spider's Web offers a window into how AGI could supercharge similar attacks in the future. AGI could analyse transportation routes to find the safest, fastest, and least conspicuous way to move cargo. It could plan truck routes that avoid busy checkpoints, choose transit times when border guards are understaffed, and even account for satellite overpasses or drone surveillance. Such a system could coordinate multimodal logistics (think planes, trains, and automobiles) with timing that no human team could match. Not to mention it could crunch traffic patterns, rail schedules, and weather data to find the perfect moment for an attack.

This hypothetical warfighting AGI could automatically generate corporate entities complete with registration documents, tax records, and websites to serve as cover. It could forge driver's licenses, passports, and employee IDs that pass automated verification—much faster than humans today could. Aside from paperwork, an AGI could manage a whole suite of deception technologies. For example, AGI could emit fake GPS signals to confuse satellite tracking or hack into a facility's CCTV feed to loop old footage while operatives move equipment.

When it's time to strike, AGI could guide each drone to its target as part of a single unified swarm, optimised to prevent collisions and spaced to maximize coverage. AGI may even make it possible to monitor the electronic warfare environment and switch frequencies if it senses jamming on the current channel. If an air defense system starts tracking the swarm, the AGI might command all drones to disperse or drop to terrain-hugging altitude to increase their odds of survival. As soon as the destination is in range, AGI could help drones autonomously recognise target types and aim for the most damaging impact points (say, by guiding a drone to the exact location of an aircraft's fuel tank).

To be sure, these are still predictions about what AGI may be capable of in the future. And there will likely be limitations.
Precision hand-work like soldering detonators, balancing rotors, and packing warheads remains hard to automate at scale without a bespoke factory line. Robots can do it, but you still need humans to do the initial set-up. Plus, explosives sweat, lithium-ion packs puff, and cheap FPV airframes warp if left in non-climate-controlled depots. Periodic maintenance, like changing desiccant packs or swapping bloated cells, would likely remain vital. A swarm of AGI-powered drones would probably still need caretakers who can move around without drawing attention.

Finally, jamming-resistant links need spectrum licences, custom SIM provisioning, or pirate base stations smuggled in-country. Deploying that communications infrastructure (like antennae or repeaters) requires boots on the ground.

But even with a heavy dose of scepticism, I find it hard to see the Ukrainian strike as anything other than a postcard from the future. Problems might look insurmountable to us, but you should never bet against the machine conjuring up an unorthodox solution. I fear that the best-case scenario ahead of us is one where attacks such as these can simply be delivered slightly faster. The worst-case scenario is one in which a Spider's Web-style operation can be conducted orders of magnitude faster by just a handful of people.

Thinking about the implications of AGI is useful in that it reminds us that power flows to whoever can orchestrate complexity faster than the adversary can comprehend it. Complexity is the strategic currency of war in the information age, and AGI is a complexity accelerator. If AGI finds its way into the wrong hands, it could become much easier to pull off a deadly attack. That is as true for the great powers as it is for rogue actors. This is the new strategic reality, and every military has to plan for it.

What Ukraine's Spider's Web strike taught us is that the hardware for an AGI warfighter is ready. All that remains is the software.
