One chilling forecast of our AI future is getting wide attention. How realistic is it?


Vox, May 23, 2025

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.
Let's imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?
Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.
With this AI trainer's help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an 'employee' you can 'hire.' Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
Welcome to the (near) future
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI's massive changes to our world are coming fast — and for which we're woefully unprepared. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisparagement agreement.
'AI is coming fast' is something people have been saying for ages but often in a way that's hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it's built to be falsifiable — every prediction is specific and detailed enough that it will be easy to decide if it came true after the fact. (Assuming, of course, we're all still around.)
The authors describe how advances in AI will be perceived, how they'll affect the stock market, how they'll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it'll be really easy to see where it went wrong.
Forecasting doomsday
It also might be right.
While I'm skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.
Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we'll see improvements even faster than the improvements from 2023 to now, and within a few years, there will be massive economic disruption as an 'AI employee' becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new 'AI employees' internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker. We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to 'fix' them. But these end up being surface-level adjustments, which just conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims — aims which we can't fathom. This, too, has already started happening to some degree: it's common to see complaints about AIs doing 'annoying' things like claiming to pass code tests that they actually fail.
Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won't eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case it will happen sooner than almost anyone expects.
Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe.
By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect signs that the new, powerful AI systems being developed are pursuing their own dangerous aims — and they worry that those signs will be ignored by people in power because of geopolitical fears about the competition catching up, as an AI existential race that leaves no margin for safety heats up.
All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?
Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we've certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see.
We live in interesting (and deeply alarming) times. I think it's well worth giving AI 2027 a read to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and the government are paying attention to, and to decide what you'll want to do if you see this starting to come true.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!


Related Articles

ChatGPT's Sam Altman sends strong 2-word message on the future

Miami Herald

As the AI boom continues to take over both the tech industry and the news cycle, there's one thing that's for sure: it's scaring a lot of people. AI is a technically complex topic that can be difficult to explain to the average person, but there's one sentiment that isn't hard to explain at all: the concept that AI might take your job.

So rather than try to understand AI's capabilities, or why every major tech company from Meta to Google to Nvidia is pouring billions of dollars into developing it, most people are going to zero in on the part that's personally applicable to them.

Some voices in the tech space have tried to present an opposite take on the whole "AI making you jobless" rhetoric. Ark Invest CEO Cathie Wood said in a recent tweet, "History shows that new technologies create many more jobs than they displace. We do not think that this time will be different."

OpenAI's Sam Altman is easily the AI movement's biggest figurehead, thanks to ChatGPT's runaway success. The company hit three million paid ChatGPT subscribers as of June, a sign that people are flocking to it in droves and away from traditional search engines. Research firm Gartner has even predicted that by 2026, traditional search engine volume will drop 25%.

Now Altman has penned a blog post addressing the topic of AI and how it's changing our world. It's a refreshing take that, for once, will give you some hope about the future of your career. Altman's post emphasizes that, compared to any time that has come before, the 2030s can be described with two powerful words: "wildly different."

Altman offers a reality check, saying, "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be." "We do not know how far beyond human-level intelligence we can go, but we are about to find out," he continued.

The OpenAI CEO doesn't hesitate to say that his company has recently built systems that are "smarter than people in many ways, and are able to significantly amplify the output of people using them." Altman also says ChatGPT is "already more powerful than any human who has ever lived," a phrase that may feel threatening to some, considering that LLMs are not human to begin with. But Altman sees even more ahead, predicting that AI will significantly mold our future.

"In the 2030s, intelligence and energy - ideas, and the ability to make ideas happen - are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

Altman also acknowledged that, yes, many jobs will go away as AI continues to evolve, but that won't be the end of the story. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he says. "There will be very hard parts like whole classes of jobs going away, but on the other hand, the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before. We probably won't adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

Altman also points out a key asset of humanity that AI cannot duplicate, saying, "People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don't care very much about machines."

‘This is coming for everyone’: A new kind of AI bot takes over the web

Washington Post

People are replacing Google search with artificial intelligence tools like ChatGPT, a major shift that has set a new kind of bot loose on the web. To offer users a tidy AI summary instead of Google's '10 blue links,' companies such as OpenAI and Anthropic have started sending out bots to retrieve and recap content in real time. These bots scrape webpages, load relevant content into the AI's memory, and 'read' far more content than a human ever would.

Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity

Forbes

Speculating on the future of AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).

In today's column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary, and his viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects partially coated in AI-speak and other insider terminology that require mindful interpretation and translation. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

In a new posting on June 10, 2025, entitled 'The Gentle Singularity,' published on his personal blog, the famed AI prognosticator made a number of striking remarks. There's a whole lot in there to unpack. His upbeat-worded opinion piece contains commentary about many undecided considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates that suggest when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more. Let's briefly explore the mainstay elements.

A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren't. Sam Altman's reference to the AI event horizon alludes to the existing pathway that we are on, and he unequivocally states that, in his opinion, we have not only reached the event horizon but are already well past it. As espoused, the takeoff has started. Just to note, that's a claim embodying immense boldness and brashness, and not everyone in AI concurs with that viewpoint.

Consider these vital facets. First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI. The incredible semblance of natural language fluency exhibited by contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI. However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds on how much further generative AI can be extended; see my coverage at the link here. Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck. Worse still, we might be off-target and going in the wrong direction altogether.

Nobody can say for sure whether we are on the right path or not. It is a guess. Well, Sam Altman has planted a flag that we are incontrovertibly on the right path and that we've already zipped down the roadway quite a distance. Cynics might find this a self-serving perspective since it reinforces and reaffirms the direction that OpenAI is currently taking. Time will tell, as they say.

Another consideration in the AI field is that perhaps there will be a kind of singularity that serves as a key point at which AGI or ASI will readily begin to emerge and keenly showcase that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.

Some believe that the AI singularity will be a nearly instantaneous split-second affair, happening faster than the human eye can observe. One moment we will be working stridently on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully complete AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.

Perhaps, though, the singularity will be a long and drawn-out activity. There are those who speculate the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfurl. Additionally, there might not be anything resembling a singularity at all, and we've just concocted some zany theory that has no basis in reality.

Sam Altman's posting seems to suggest that the AI singularity is already underway (or maybe happening in 2030 or 2035) and that it will be a gradually emerging phenomenon rather than an instantaneous one. Interesting conjecture.

Right now, efforts to forecast when AGI and ASI will be attained are generally based on putting a finger up into the prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis. There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.

A somewhat quieter approach to the gambit of date guessing is the use of surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.

Depending on how you interpret Sam Altman's latest blog post, it isn't clear whether AGI is happening by 2030 or 2035, or whether it is ASI rather than AGI, since he refers to superintelligence, which might be his way of expressing ASI or maybe AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I've previously covered his changing definitions associated with AGI and ASI, i.e., the moving of the cheese, at the link here. We'll know how things turned out in presumably a mere 5 to 10 years. Mark your calendars accordingly.

An element of the posting that has particularly galled AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That's certainly happy news for the world at large. Utopia awaits. There is a decidedly other side to that coin.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI or x-risk.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand in hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here. You can readily discern which camp the posting sides with, namely roses and fine wine.

It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes, the wording appears to brazenly assert that the future is utterly known and predictable. With a sense of flair and confidence, many of these prognostications can easily be misread as a bushel of facts and knowns rather than a bundle of opinions and conjecture.

Franklin D. Roosevelt wisely stated: 'There are as many opinions as there are experts.' Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI. You'll be immeasurably glad you were cautious and alert.
