AI industry 'timelines' to human-like AGI are getting shorter. But AI safety is getting increasingly short shrift
Hello and welcome to Eye on AI. In this edition…AGI timelines are getting shorter, but so is the amount of attention AI labs seem to be paying to AI safety…Venture capital enthusiasm for OpenAI alums' startups shows no sign of waning…A way to trace LLM outputs back to their source…and the military looks to LLMs for decision support, alarming humanitarian groups.

'Timelines' is a shorthand term AI researchers use to describe how soon they think we'll achieve artificial general intelligence, or AGI. While its definition is contentious, AGI is basically an AI model that performs as well as or better than humans at most tasks. Many people's timelines are getting alarmingly short. Former OpenAI policy researcher Daniel Kokotajlo and a group of forecasters with excellent track records have gotten a lot of attention for authoring a detailed scenario, called AI 2027, that suggests AGI will be achieved in, you guessed it, 2027. They argue this will lead to a sudden 'intelligence explosion' as AI systems begin building and refining themselves, rapidly leading to superintelligent AI.
Dario Amodei, the cofounder and CEO of AI company Anthropic, thinks we'll hit AGI by 2027 too. Meanwhile, OpenAI cofounder and CEO Sam Altman is cagey, trying hard not to be pinned down on a precise year, but he's said his company 'knows how to build AGI'—it is just a matter of executing—and that 'systems that start to point to AGI are coming into view.' Demis Hassabis, the Google DeepMind cofounder and CEO, has a slightly longer timeline—five to 10 years—but researchers at his company just published a report saying it's 'plausible' AGI will be developed by 2030.
The implications of short timelines for policy are profound. For one thing, if AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare. While I have previously predicted AI won't lead to mass unemployment, my view is predicated on the idea that AGI will not be achieved in the next five years. If AGI does arrive sooner, it could indeed lead to large job losses as many organizations would be tempted to automate roles, and two years is not enough time to allow people to transition to new ones.
Another implication of short timelines is that AI safety and security ought to become more important. (The Google DeepMind researchers, in their latest AI safety paper, said AGI could lead to severe consequences, including the 'permanent end of humanity.')
Jack Clark, a cofounder of Anthropic who heads its policy team, wrote in his personal newsletter, Import AI, a few weeks ago that short timelines called for 'more extreme' policy actions. These, he wrote, would include increased security at leading AI labs, mandatory pre-deployment safety testing by third parties (moving away from the current voluntary system), and spending more time talking about—and maybe even demonstrating—dangerous misuses of advanced AI models in order to convince policymakers to take stronger regulatory action.
But, contrary to Clark's position, even as timelines have shortened, many AI companies seem to be paying less, not more, attention to AI safety. For instance, last week, my Fortune colleague Bea Nolan and I reported that Google released its latest Gemini 2.5 Pro model without a key safety report, in apparent violation of commitments the company had made to the U.S. government in 2023 and at various international AI safety summits. And Google is not alone—OpenAI also released its Deep Research model without its safety report, known as a 'system card,' publishing one only months later. The Financial Times also reported this week that OpenAI has been slashing the time it allows both internal and third-party safety evaluators to test its models before release, in some cases giving testers just days for evaluations that had previously been allotted weeks or months. Meanwhile, AI safety experts criticized Meta for publishing a system card for its new Llama 4 model family that provided only bare-bones information on the models' potential risks.
The reason safety is getting short shrift is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market. The closer AGI appears to be, the more bitterly fought the race to get there first will be.
In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for society as a whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way. The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of U.S. AI labs—even a little bit. (It doesn't help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, including possible licensing requirements for leading AI labs, but now says he thinks AI companies can self-regulate on AI safety—are lobbying the government to eschew any legal requirements.)
Of course, having unsafe, uncontrollable AI would be in neither Washington nor Beijing's interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there's a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies' timelines are wrong.
With that, here's the rest of this week's AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news, if you're interested in learning more about how AI will impact your business, the economy, and our societies (and given that you're reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I'll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
This story was originally featured on Fortune.com