
One chilling forecast of our AI future is getting wide attention. How realistic is it?
Let's imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.
This story was first featured in the Future Perfect newsletter.
Sign up here to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?
Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.
With this AI trainer's help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an 'employee' you can 'hire.' Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
Welcome to the (near) future
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI's massive changes to our world are coming fast — and that we're woefully unprepared for them. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisclosure agreement.
'AI is coming fast' is something people have been saying for ages but often in a way that's hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it's built to be falsifiable — every prediction is specific and detailed enough that it will be easy to decide if it came true after the fact. (Assuming, of course, we're all still around.)
The authors describe how advances in AI will be perceived, how they'll affect the stock market, how they'll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it'll be really easy to see where it went wrong.
Forecasting doomsday
It also might be right.
While I'm skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.
Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we'll see improvements even faster than the improvements from 2023 to now, and within a few years, there will be massive economic disruption as an 'AI employee' becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new 'AI employees' internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker. We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to 'fix' them. But these end up being surface-level adjustments, which just conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims — aims we can't fathom. This, too, has already started happening to some degree: it's common to see complaints about AIs doing 'annoying' things like claiming to have passed code tests they didn't actually pass.
Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won't eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case it will happen sooner than almost anyone expects.
Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe.
By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect signs that the new, powerful AI systems being developed are pursuing their own dangerous aims — and they worry that those signs will be ignored by people in power because of geopolitical fears about the competition catching up, as an existential AI race that leaves no margin for safety heats up.
All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?
Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we've certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see.
We live in interesting (and deeply alarming) times. I think it's well worth giving AI 2027 a read: to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and the government are paying attention to, and to decide what you'll want to do if you see this starting to come true.
