One chilling forecast of our AI future is getting wide attention. How realistic is it?


Vox | May 23, 2025

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farming, and also writes the Future Perfect newsletter.
Let's imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.
This story was first featured in the Future Perfect newsletter.
Sign up here to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?
Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.
With this AI trainer's help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an 'employee' you can 'hire.' Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
Welcome to the (near) future
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI's massive changes to our world are coming fast, and that we're woefully unprepared for them. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisclosure agreement.
'AI is coming fast' is something people have been saying for ages but often in a way that's hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it's built to be falsifiable — every prediction is specific and detailed enough that it will be easy to decide if it came true after the fact. (Assuming, of course, we're all still around.)
The authors describe how advances in AI will be perceived, how they'll affect the stock market, how they'll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it'll be really easy to see where it went wrong.
Forecasting doomsday
It also might be right.
While I'm skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.
Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we'll see improvements even faster than the improvements from 2023 to now, and within a few years, there will be massive economic disruption as an 'AI employee' becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new 'AI employees' internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker. We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to 'fix' them. But these end up being surface-level adjustments, which just conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims — aims which we can't fathom. This, too, has already started happening to some degree. It's common to see complaints about AIs doing 'annoying' things like claiming to pass code tests that they actually fail.
Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won't eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case it will happen sooner than almost anyone expects.
Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe.
By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect signs that the new, powerful AI systems being developed are pursuing their own dangerous aims. They worry that people in power will ignore those signs out of geopolitical fear of the competition catching up, as an existential AI race heats up that leaves no margin for safety.
All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?
Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we've certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see.
We live in interesting (and deeply alarming) times. AI 2027 is well worth a read: it makes the vague cloud of worry that permeates AI discourse specific and falsifiable, it shows what some senior people in the AI world and the government are paying attention to, and it can help you decide what you'll want to do if you see it starting to come true.
