AI industry ‘timelines' to human-like AGI are getting shorter. But AI safety is getting increasingly short shrift
Hello and welcome to Eye on AI. In this edition…AGI timelines are getting shorter but so is the amount of attention AI labs seem to be paying to AI safety…Venture capital enthusiasm for OpenAI alums' startups shows no sign of waning…A way to trace LLM outputs back to their source…and the military looks to LLMs for decision support, alarming humanitarian groups.

'Timelines' is a short-hand term AI researchers use to describe how soon they think we'll achieve artificial general intelligence, or AGI. While its definition is contentious, AGI is basically an AI model that performs as well as or better than humans at most tasks. Many people's timelines are getting alarmingly short.

Former OpenAI policy researcher Daniel Kokotajlo and a group of forecasters with excellent track records have gotten a lot of attention for authoring a detailed scenario, called AI 2027, that suggests AGI will be achieved in, you guessed it, 2027. They argue this will lead to a sudden 'intelligence explosion' as AI systems begin building and refining themselves, rapidly leading to superintelligent AI.
Dario Amodei, the cofounder and CEO of AI company Anthropic, thinks we'll hit AGI by 2027 too. Meanwhile, OpenAI cofounder and CEO Sam Altman is cagey, trying hard not to be pinned down on a precise year, but he's said his company 'knows how to build AGI'—it is just a matter of executing—and that 'systems that start to point to AGI are coming into view.' Demis Hassabis, the Google DeepMind cofounder and CEO, has a slightly longer timeline—five to 10 years—but researchers at his company just published a report saying it's 'plausible' AGI will be developed by 2030.
The implications of short timelines for policy are profound. For one thing, if AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare. While I have previously predicted AI won't lead to mass unemployment, my view is predicated on the idea that AGI will not be achieved in the next five years. If AGI does arrive sooner, it could indeed lead to large job losses as many organizations would be tempted to automate roles, and two years is not enough time to allow people to transition to new ones.
Another implication of short timelines is that AI safety and security ought to become more important. (The Google DeepMind researchers, in their latest AI safety paper, said AGI could lead to severe consequences, including the 'permanent end of humanity.')
Jack Clark, a cofounder at Anthropic who heads its policy team, wrote in his personal newsletter, Import AI, a few weeks ago that short timelines called for 'more extreme' policy actions. These, he wrote, would include increased security at leading AI labs, mandatory pre-deployment safety testing by third parties (moving away from the current voluntary system), and spending more time talking about—and maybe even demonstrating—dangerous misuses of advanced AI models in order to convince policymakers to take stronger regulatory action.
But, contrary to Clark's position, even as timelines have shortened, many AI companies seem to be paying less, not more, attention to AI safety. For instance, last week, my Fortune colleague Bea Nolan and I reported that Google released its latest Gemini 2.5 Pro model without a key safety report, in apparent violation of commitments the company had made to the U.S. government in 2023 and at various international AI safety summits. And Google is not alone—OpenAI also released its Deep Research model without a safety report, called a 'system card,' publishing one only weeks later. The Financial Times also reported this week that OpenAI has been slashing the time it allows both internal and third-party safety evaluators to test its models before release, in some cases giving testers just a few days for evaluations that had previously been allotted weeks or months. Meanwhile, AI safety experts criticized Meta for publishing a system card for its new Llama 4 model family that provided only barebones information on the models' potential risks.
The reason safety is getting short shrift is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market. The closer AGI appears to be, the more bitterly fought the race to get there first will be.
In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for the collective whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way. The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of the U.S. AI labs—even a little bit. (It doesn't help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, including possible licensing requirements for leading AI labs, but now says he thinks AI companies can self-regulate on AI safety—are lobbying the government to eschew any legal requirements.)
Of course, having unsafe, uncontrollable AI would be in neither Washington nor Beijing's interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there's a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies' timelines are wrong.
With that, here's the rest of this week's AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news, if you're interested in learning more about how AI will impact your business, the economy, and our societies (and given that you're reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel-Bobrov, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I'll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
This story was originally featured on Fortune.com
Related Articles
Yahoo
People Are Asking ChatGPT for Relationship Advice and It's Ending in Disaster
Despite ChatGPT's well-documented issues, people are using it to advise them on relationship issues — and it's going about as well as you'd expect.

In a new editorial, Vice advice columnist Sammi Caramela said she had been blissfully unaware of the ChatGPT-as-therapist trend until someone wrote into her work email about it earlier this year. Back in February, an unnamed man told the writer that his girlfriend refused to stop using the chatbot for dating advice and would even bring up things it had told her in arguments. Though Caramela was so shocked that she "nearly choked" on her coffee, the advice-seeker wasn't all that perturbed — and claimed that he found his girlfriend's ChatGPT use fascinating.

"I was a bit floored by this confession. I had no idea people were actually turning to AI for advice, much less input on their relationships," the columnist wrote in her more recent piece. "However, the more I explored the topic, the more I realized how common it was to seek help from AI — especially in an era where therapy is an expensive luxury."

Intrigued, Caramela found a friend who used the OpenAI chatbot for similar purposes, running relationship issues by it as a "non-biased" sounding board. Eventually, that person realized that ChatGPT wasn't unbiased at all, but rather "seemed to heavily validate her experience, perhaps dangerously so." Similar questions have been posed on the r/ChatGPT subreddit, and as Caramela explained, the consensus over there suggested not only that the chatbot is something of a "yes-man," but also that its propensity to agree with users can be dangerous for people who have mental health issues.

"I often and openly write about my struggles with obsessive-compulsive disorder (OCD)," the writer divulged. "If I went to ChatGPT for dating advice and failed to mention how my OCD tends to attack my relationships, I might receive unhelpful, even harmful, input about my relationship."

Digging deeper into the world of ChatGPT therapy, Caramela found multiple threads on OCD-related subreddits about the chatbot — and on the forum dedicated to ROCD, or relationship-focused OCD, someone even admitted that the chatbot told them to break up with their partner.

"Programs like ChatGPT only speed the OCD cycle up because you can ask question after question for hours trying to gain some sense of certainty," another user responded in the r/ROCD thread. "There's always another 'what if' question with OCD."

Like so many poorly-trained human professionals, chatbots aren't equipped to handle the nuance and sensitivity needed in any therapeutic context. Regardless of what OpenAI claims in its marketing, ChatGPT can't be truly empathetic — and if your "therapist" will never be able to have a human-to-human connection, why would you want it to give you dating advice in the first place?

More on chatbot blues: Hanky Panky With Naughty AI Still Counts as Cheating, Therapist Says


Forbes
Could An Ingenious Solo Coder Be The Tipping Point For Turning AI Into AGI?
In today's column, I address an often-discussed piece of AI insider chitchat: that someday an ingenious solo coder, aka an individual independent AI developer, will miraculously advance conventional AI into becoming AGI (artificial general intelligence). All by themselves. This seems to be the dream of many AI developers, for good reasons, since such a person would undoubtedly be showered with immense acclaim and grandiose riches. Their name would certainly go into the history books forever. The nagging question is whether that kind of feat could realistically occur. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will ever reach AGI, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Billions upon billions of dollars are being spent nowadays to try and arrive at AGI. Massive teams of AI developers are toiling at work daily. The assumption is that the more people you toss at the problem, the higher the chances of solving the problem. The problem is how to turn conventional AI into AGI. Nobody knows how to accomplish this outstretched goal.

Maybe, just maybe, there is a chosen one out there who will astoundingly be akin to Neo in The Matrix and be able to make AGI become a reality. I'd bet that a lot of today's AI developers privately believe that they are that person. Sitting there at work, alongside an army of other AI developers, they know in their heart of hearts that only they ultimately hold the key to attaining AGI. The riches, the fame, the personal sense of accomplishment, it's all a huge allure.

Some might believe that their quest is based on benefiting humanity. They set aside any personal glory. Their motivation is that AGI will presumably help cure cancer and potentially solve many of the world's difficulties. It is such a tempting consideration that some AI developers leave an AI maker and start their own firm to pursue AGI. Those dolts and laggards that were holding them back are no longer a boat anchor. Now, they have the freedom to work 24/7 and keep their mind laser-focused on achieving AGI. No distractions, no workplace gossip, no dealing with moving up the ladder. Straight heads-down obsessive preoccupation with landing at AGI.
This belief that one person alone can attain AGI is typically referred to as the lone wolf condition, a type of maverick mentality. Cultures vary as to the assumption that large-scale accomplishments can be attained via an individual versus a group effort. Some would say that it is preposterous to imagine that AGI would be arrived at by one person working alone. Not possible. Impossible.

A crucial factor is whether the individual is working totally from scratch or whether they are building on top of prior work. The idea of devising AGI from scratch would imply that the person sets aside all prior AI efforts and begins with a blank sheet of paper. They depart from everything that has come before. In an ingenious fashion that no one right now can describe, this person comes up with a new way to craft AI and that will lead to AGI. Wow, that's a tall order. It seems highly unlikely. The odds are that they would need to use at least some aspects of AI that we already know and have in hand.

A counterargument is that maybe the current AI path is completely wrong and won't get us to AGI. The only smart thing to do is chuck what we have now and take a fresh look at things. Maybe that might be the final bit of trickery, but I wouldn't bet the farm on it. Let's assume that the more probable route is that the lone individual builds on what we already have. Sure, they might make changes here or there, but generally they would be leveraging existing state-of-the-art AI.

The lone wolf seems to have a much better chance if it turns out that they are at the vital tipping point. In essence, AI had been advanced to the point that it was nearing AGI. The last mile, as it were, hadn't yet been figured out. The enterprising solo coder managed to bridge that final gap. Voila, they got us to AGI.

Do they deserve outstanding accolades for this feat? One argument is that they stood on the backs of others and should not take undue credit as though they individually won the big prize. Their bridging might be clever, and it might be the final piece of a puzzling puzzle, but this doesn't grant them the right to proclaim they invented or discovered AGI. Whoa, bellows the retort, to the victor go the spoils. If this maverick found the final magic, by gosh, they deserve to be anointed as the founder of AGI. That's the way the ball bounces. You could say the same about almost all great inventions and inventors, namely that they alone did not do everything from scratch and managed to build on whatever came before their time.

One notable concern is that if a lone wolf did arrive at AGI, they might be ill-prepared to keep the AGI in proper check. The logic is as follows. We already know that current AI can make mistakes and produce errors, see my discussion at the link here. A large AI maker would presumably have established all sorts of safety and security precautions, anticipating that AGI ought not to be let loose wantonly. An individual working in their pajamas in their garage is seemingly not going to be taking similar precautions.

Hogwash, some say to that concern. If you believe that large AI makers will be ready for AGI, you might as well buy swamp land. AI makers talk a good game, but they aren't going to be much better at bounding AGI than a lone individual would be. Get off the back of the lone wolf and don't fault them for something that any AI maker would indubitably be faulted for.
Another concern is that AGI will be a geopolitical tool for nations to dominate other nations (see my analysis at the link here), and the prospect that a lone individual would have AGI in their hands alone is rather frightening. What is the coder going to do with AGI?

They might decide to keep it a secret. There is a solid chance that AGI could make them rich, whether by using the AI to make investments or to find abundantly clever paths to ample riches. No need to inform anyone else about your secret sauce. Keep it to yourself. The downside there is that if indeed AGI can cure cancer and solve other global issues, the person is hogging AGI and denying the rest of humanity a chance at great relief and peace.

The lone wolf might dangerously hand AGI over to evildoers, either by accident or on purpose. Those evildoers then use AGI to threaten nations and try to take over the world. This individual might not have anticipated how the AGI could be utilized to wreak havoc and harm humanity. Nonetheless, they let the horse out of the barn, and we are all the worse off accordingly.

Does this rugged individual who attains AGI have to be ingenious? Some would say that such a person doesn't have to be much more than average. Here's the deal. Under the tipping point notion, also labeled the last straw theory, a solo coder might try something that just so happens to tilt AI into becoming AGI. The developer might not have had any clue beforehand that they would prevail in attaining AGI. When Edison was trying to find the right filament to make viable light bulbs, he reportedly made several thousand attempts. It could be that a solo coder just keeps grinding away and manages to get lucky by finding the right aspect that shifts AI into AGI.

A chuckle among AI insiders is that it would be ironic if some mediocre AI developer was the one who happened to fall into the miracle of achieving AGI. Imagine the frustration and exasperation of all the world's brightest AI developers. They would be chagrined that a less-than-stellar AI person almost mindlessly struck gold by sheer dumb luck. The idea that being ingenious isn't a necessary requirement gives tremendous hope to a lot of those AI developers who are desirous of being the one solo coder to discover or invent AGI. Though other AI developers might thumb their nose at you now, aha, wait until you are sitting on top of the world as the celebrated deviser of AGI.

Remember the famous line by Robert Frost: 'Two roads diverged in a wood, and I took the one less traveled by, and that has made all the difference.' Good luck to all those adventurous mavericks seeking to achieve AGI.


TechCrunch
Sam Altman thinks AI will have ‘novel insights' next year
In a new essay published Tuesday called 'The Gentle Singularity,' OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival.

The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next.

At one point in the essay, Altman claimed that next year, in 2026, the world will 'likely see the arrival of [AI] systems that can figure out novel insights.' While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas.

Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn't be the only company focused on this effort — several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims has generated novel approaches to complex math problems. FutureHouse, another startup backed by former Google CEO Eric Schmidt, claims its AI agent tool has been capable of making a genuine scientific discovery. In May, Anthropic launched a program to support scientific research. If successful, these companies could automate a key part of the scientific process, and potentially break into massive industries such as drug discovery, materials science, and other fields with science at their core.

This wouldn't be the first time Altman has tipped his hand about OpenAI's plans in a blog. In January, Altman wrote another blog post suggesting that 2025 would be the year of agents. His company then proceeded to drop its first three AI agents: Operator, Deep Research, and Codex.

But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses.
Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense for what is creative and interesting. Whether OpenAI truly creates an AI model that is capable of producing novel insights remains to be seen. Still, Altman's essay may feature something familiar — a preview of where OpenAI is likely headed next.