
This A.I. Forecast Predicts Storms Ahead
The year is 2027. Powerful artificial intelligence systems are becoming smarter than humans, and are wreaking havoc on the global order. Chinese spies have stolen America's A.I. secrets, and the White House is rushing to retaliate. Inside a leading A.I. lab, engineers are spooked to discover that their models are starting to deceive them, raising the possibility that they'll go rogue.
These aren't scenes from a sci-fi screenplay. They're scenarios envisioned by a nonprofit in Berkeley, Calif., called the A.I. Futures Project, which has spent the past year trying to predict what the world will look like over the next few years, as increasingly powerful A.I. systems are developed.
The project is led by Daniel Kokotajlo, a former OpenAI researcher who left the company last year over his concerns that it was acting recklessly.
While at OpenAI, where he was on the governance team, Mr. Kokotajlo wrote detailed internal reports about how the race for artificial general intelligence, or A.G.I. — a fuzzy term for human-level machine intelligence — might unfold. After leaving, he teamed up with Eli Lifland, an A.I. researcher who had a track record of accurately forecasting world events. They got to work trying to predict A.I.'s next wave.
The result is 'AI 2027,' a report and website released this week that describes, in a detailed fictional scenario, what could happen if A.I. systems surpass human-level intelligence — which the authors expect to happen in the next two to three years.
'We predict that A.I.s will continue to improve to the point where they're fully autonomous agents that are better than humans at everything by the end of 2027 or so,' Mr. Kokotajlo said in a recent interview.
There's no shortage of speculation about A.I. these days. San Francisco has been gripped by A.I. fervor, and the Bay Area's tech scene has become a collection of warring tribes and splinter sects, each one convinced that it knows how the future will unfold.
Some A.I. predictions have taken the form of a manifesto, such as 'Machines of Loving Grace,' a 14,000-word essay written last year by Dario Amodei, the chief executive of Anthropic, or 'Situational Awareness,' a report by the former OpenAI researcher Leopold Aschenbrenner that was widely read in policy circles.
The people at the A.I. Futures Project designed theirs as a forecast scenario — essentially, a piece of rigorously researched science fiction that uses their best guesses about the future as plot points. The group spent nearly a year honing hundreds of predictions about A.I. Then, they brought in a writer — Scott Alexander, who writes the blog Astral Codex Ten — to help turn their forecast into a narrative.
'We took what we thought would happen and tried to make it engaging,' Mr. Lifland said.
Critics of this approach might argue that fictional A.I. stories are better at spooking people than educating them. And some A.I. experts will no doubt object to the group's central claim that artificial intelligence will overtake human intelligence.
Ali Farhadi, the chief executive of the Allen Institute for Artificial Intelligence, an A.I. lab in Seattle, reviewed the 'AI 2027' report and said he wasn't impressed.
'I'm all for projections and forecasts, but this forecast doesn't seem to be grounded in scientific evidence, or the reality of how things are evolving in A.I.,' he said.
There's no question that some of the group's views are extreme. (Mr. Kokotajlo, for example, told me last year that he believed there was a 70 percent chance that A.I. would destroy or catastrophically harm humanity.) And Mr. Kokotajlo and Mr. Lifland both have ties to Effective Altruism, a philosophical movement popular among tech workers that has been making dire warnings about A.I. for years.
But it's also worth noting that some of Silicon Valley's largest companies are planning for a world beyond A.G.I., and that many of the crazy-seeming predictions made about A.I. in the past — such as the view that machines would pass the Turing Test, which asks whether a machine can hold a conversation convincingly enough to pass for human — have come true.
In 2021, the year before ChatGPT launched, Mr. Kokotajlo wrote a blog post titled 'What 2026 Looks Like,' outlining his view of how A.I. systems would progress. A number of his predictions proved prescient, and he became convinced that this kind of forecasting was valuable, and that he was good at it.
'It's an elegant, convenient way to communicate your view to other people,' he said.
Last week, Mr. Kokotajlo and Mr. Lifland invited me to their office — a small room in a Berkeley co-working space called Constellation, where a number of A.I. safety organizations hang a shingle — to show me how they operate.
Mr. Kokotajlo, wearing a tan military-style jacket, grabbed a marker and wrote four abbreviations on a large whiteboard: SC > SAR > SIAR > ASI. Each one, he explained, represented a milestone in A.I. development.
First, he said, sometime in early 2027, if current trends hold, A.I. will be a superhuman coder. Then, by mid-2027, it will be a superhuman A.I. researcher — an autonomous agent that can oversee teams of A.I. coders and make new discoveries. Then, in late 2027 or early 2028, it will become a superintelligent A.I. researcher — a machine intelligence that knows more than we do about building advanced A.I., and can automate its own research and development, essentially building smarter versions of itself. From there, he said, it's a short hop to artificial superintelligence, or A.S.I., at which point all bets are off.
If all of this sounds fantastical … well, it is. Nothing remotely like what Mr. Kokotajlo and Mr. Lifland are predicting is possible with today's A.I. tools, which can barely order a burrito on DoorDash without getting stuck.
But they are confident that these blind spots will shrink quickly, as A.I. systems become good enough at coding to accelerate A.I. research and development.
Their report focuses on OpenBrain, a fictional A.I. company that builds a powerful A.I. system known as Agent-1. (They decided against singling out a particular A.I. company, instead creating a composite out of the leading American A.I. labs.)
As Agent-1 gets better at coding, it begins to automate much of the engineering work at OpenBrain, which allows the company to move faster and helps build Agent-2, an even more capable A.I. researcher. By late 2027, when the scenario ends, Agent-4 is making a year's worth of A.I. research breakthroughs every week, and threatens to go rogue.
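To get a feel for the arithmetic behind that claim, consider a deliberately simple compounding model. The sketch below is my own illustration, not the report's methodology, and the 8 percent weekly gain is an invented number: it just shows how modest, steady speedups from automated researchers can reach a 'year of progress per week' pace within about a year.

```python
# Toy model of compounding research automation. Illustrative only:
# the 8%-per-week gain is an assumption, not a figure from 'AI 2027'.
WEEKLY_GAIN = 0.08  # assumed speedup automated researchers add each week

speedup = 1.0  # research speed relative to an all-human team
weeks = 0
while speedup < 52:  # 52x means a year's worth of progress every week
    speedup *= 1 + WEEKLY_GAIN
    weeks += 1

print(f"After {weeks} weeks: {speedup:.0f}x research speed")
# Prints "After 52 weeks: 55x" -- small weekly multipliers, compounded,
# are what make fast-takeoff stories like this one move so quickly.
```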
I asked Mr. Kokotajlo what he thought would happen after that. Did he think, for example, that life in the year 2030 would still be recognizable? Would the streets of Berkeley be filled with humanoid robots? People texting their A.I. girlfriends? Would any of us have jobs?
He gazed out the window, and admitted that he wasn't sure. If the next few years went well and we kept A.I. under control, he said, he could envision a future where most people's lives were still largely the same, but where nearby 'special economic zones' filled with hyper-efficient robot factories would churn out everything we needed.
And if the next few years didn't go well?
'Maybe the sky would be filled with pollution, and the people would be dead?' he said nonchalantly. 'Something like that.'
One risk of dramatizing your A.I. predictions this way is that if you're not careful, measured scenarios can veer into apocalyptic fantasies. Another is that, by trying to tell a dramatic story that captures people's attention, you risk missing more boring outcomes, such as the scenario in which A.I. is generally well behaved and doesn't cause much trouble for anyone.
Even though I agree with the authors of 'AI 2027' that powerful A.I. systems are coming soon, I'm not convinced that superhuman A.I. coders will automatically pick up the other skills needed to bootstrap their way to general intelligence. And I'm wary of predictions that assume that A.I. progress will be smooth and exponential, with no major bottlenecks or roadblocks along the way.
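That worry is easy to make concrete. The snippet below, with parameters I invented for illustration, compares a pure exponential with a logistic curve that has the same early growth rate but a hard capacity ceiling; the two are nearly indistinguishable at first and then diverge completely once the bottleneck binds.

```python
import math

# Illustrative only: invented parameters, not drawn from any forecast.
r = 0.5          # assumed growth rate per time step
ceiling = 100.0  # assumed capacity limit (the "bottleneck")

def exponential(t: int) -> float:
    return math.exp(r * t)

def logistic(t: int) -> float:
    # Same initial growth rate as the exponential, but capped.
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):9.1f}  logistic={logistic(t):5.1f}")
# Early on the curves track each other; by t=20 the exponential is
# near 22,000 while the bottlenecked curve has flattened just under 100.
```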
But I think this kind of forecasting is worth doing, even if I disagree with some of the specific predictions. If powerful A.I. is really around the corner, we're all going to need to start imagining some very strange futures.