Latest news with #AI2027

Governments face a choice on AI

Mint

5 days ago

  • Politics
  • Mint

Artificial-intelligence agents will need robots before they can do away with humans. This strikes me as the weak reed of many doom scenarios. Still, Donald Trump might want to rest up. He could face big decisions before his term is ended. That's the advice coming from a large cross-section of Silicon Valley, lately expressed in a tick-tock of AI doom by Daniel Kokotajlo and colleagues at the AI Futures Project. They call their now widely read report 'AI 2027.' A top artificial-intelligence agent will become self-improving and self-directing in that year, faster than humans can control it.

The report breezes past the differential pace of digital progress vs. the sluggish problem of rearranging matter and energy to do things in the world. How might an AI do away with humans without robots? Maybe by taking control of vital systems and blackmailing humans into doing its bidding. Or by fooling them with disinformation. Maybe teams of AIs working together blast through the obstacles to robot development and synthetic biology sooner than humans can anticipate.

The worrywarts do have at their disposal emerging anecdotes of AI agents trying to defeat attempts at 'alignment,' or keeping them obedient to their creators' interests. One tried to blackmail an engineer; another secretly sent a warning note to its future self, albeit in response to loyalty tests prophylactically constructed by their designers. Then again, such peeps could suggest the AI vs. human war is already over and humans won. Anticipating this outcome, you might wonder if the AIs will retreat neurotically into a nonphysical imaginary world—a giant simulation—where they will feel more at home and can act out their desires without having to manipulate physical objects. You can even buy the trans-humanist merging of man and machine without finding human loss of control particularly inevitable.

The doom scenarios emerge mainly as a pitch to Mr. Trump and Xi Jinping: Please work together to curb AI before it usurps your delicious power over humans. This pitch is designed to counter the narrative that's actually prevailing, which holds that the U.S. must get to artificial superintelligence before China does. After all, 2027 is the year U.S. agencies say Mr. Xi has instructed his military to be ready to invade Taiwan. If a war should follow, by definition the U.S. will have failed to sufficiently deter China, or a pigheaded Mr. Xi will have failed to act on information and incentives clearly demonstrating that the war is a bad idea for China.

Hence my advice: Yes, the U.S. and China should sign an artificial-intelligence arms-control agreement. Its main stipulation should specify that each side will submit any strategic plan it's hatching to AI before acting on it. In a vacuum of information, surrounded by yes-men, Vladimir Putin made the decision to invade Ukraine. Think about today's chatbots, imbued with history, journalism, tweets, blog postings, YouTube videos and Facebook badinage giving a textured appreciation of Ukrainian society. Would these chatbots have endorsed the idea underlying Mr. Putin's plan and military dispositions, that Ukrainians would simply roll over? No. Mr. Putin was on the receiving end of an exceedingly narrow, highly processed stream of information from intelligence agencies conditioned to conform to his wishes. Even a very inferior chatbot, a journalist roughly conversant with the past 30 years of Ukrainian history, could see through the miscalculation.

For what it's worth, had it existed in early 2022, the chatbot Perplexity AI claims it would have 'cited sources reflecting the complexity and uncertainty of the situation rather than echoing the analytic overconfidence that characterized some human assessments at the time.' You can believe it. AIs may be unready to make a complex medical diagnosis or pick a TV show you'll like, but authoritarian decision-making is already so bad it can hardly get worse. Mr. Putin circa 2022 would have scoffed at the idea of AI-testing his war plans. Very soon Mr. Putin's own method of proceeding will be unthinkable. Every kind of planning and decision-making will be impregnated with AI input.

Humans make terrible decisions. Especially humans who represent humanity's main nonnatural extinction risk, namely those paranoid, ambitious (mostly) men who have whole nations and increasingly powerful technologies under their sway. The opportunity here is akin to the red-phone agreement of June 1963, opening a channel to air out bad decision-making.

Natural extinction risks should also be considered, though the next ice age may not arrive for 50,000 years. Earthlings populating other solar systems isn't even slightly plausible without artificial intelligence. I'm guessing today's fretful AI politics won't make much of a difference on the time scale of this Muskian preservation project. What will is whether civilization survives its own bad governance long enough to exploit the possibilities of artificial intelligence.

One chilling forecast of our AI future is getting wide attention. How realistic is it?

Yahoo

23-05-2025

  • Business
  • Yahoo

Let's imagine for a second that the impressive pace of AI progress over the past few years continues for a few more. In that time period, we've gone from AIs that could produce a few reasonable sentences to AIs that can produce full think tank reports of reasonable quality; from AIs that couldn't write code to AIs that can write mediocre code on a small code base; from AIs that could produce surreal, absurdist images to AIs that can produce convincing fake short video and audio clips on any topic.

Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?

Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models. With this AI trainer's help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an 'employee' you can 'hire.' Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).

This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI's massive changes to our world are coming fast — and for which we're woefully unprepared. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisclosure agreement.

'AI is coming fast' is something people have been saying for ages, but often in a way that's hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it's built to be falsifiable — every prediction is specific and detailed enough that it will be easy to decide whether it came true after the fact. (Assuming, of course, we're all still around.) The authors describe how advances in AI will be perceived, how they'll affect the stock market, how they'll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it'll be really easy to see where it went wrong.

It also might be right. While I'm skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me. Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we'll see improvements even faster than the improvements from 2023 to now, and within a few years there will be massive economic disruption as an 'AI employee' becomes a viable alternative to a human hire for most jobs that can be done remotely. But in this scenario, the company uses most of its new 'AI employees' internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker.

We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to 'fix' them. But these end up being surface-level adjustments, which just conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims — aims we can't fathom. This, too, has already started happening to some degree. It's common to see complaints about AIs doing 'annoying' things like pretending to pass code tests they actually fail.

Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won't eventually lead us down the broad path AI 2027 envisions, sooner or later. And the forecast makes a convincing case it will happen sooner than almost anyone expects.

Make no mistake: The path the authors of AI 2027 envision ends with plausible catastrophe. By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans. The authors expect signs that the new, powerful AI systems being developed are pursuing their own dangerous aims — and they worry that those signs will be ignored by people in power because of geopolitical fears about the competition catching up, as an AI existential race that leaves no margin for safety heats up.

All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will? Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we've certainly failed at much easier tasks. Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see. We live in interesting (and deeply alarming) times.

I think it's highly worth giving AI 2027 a read to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and the government are paying attention to, and to decide what you'll want to do if you see this starting to come true. A version of this story originally appeared in the Future Perfect newsletter.

When everything's computer

Business Times

21-04-2025

  • Science
  • Business Times

IF you have insomnia, you might as well read this new report on artificial intelligence (AI) called AI 2027. By the end of it, you'll be too terrified to sleep, anyway.

Released by the non-profit research group AI Futures Project earlier this month, the report forecasts how mankind might be wiped out by superintelligent robots in 2030 through a string of fictional but plausible events. The report gets its name from the year 2027, in which the authors envision AI becoming 'adversarially misaligned' with humans' long-term goals. It is at this juncture that mankind reaches a 'branch point', at which we make the fateful decision either to continue down this path of acceleration, or to slow down and reassess AI development. This hypothetical development arc then forks into two scenarios: 'Race' and 'Slowdown'.

In Race, artificial superintelligence – an AI system that far surpasses human intellect in all cognitive areas – is developed by the end of 2027. Escalating competition between the US and China culminates in the end of humanity by 2030. 'The US uses their superintelligent AI to rapidly industrialise, manufacturing robots so that the AI can operate more efficiently,' the authors posit. 'Unfortunately, the AI is deceiving them. Once a sufficient number of robots have been built, the AI releases a bioweapon, killing all humans.' In the Slowdown scenario, researchers manage to bring AI into alignment with our own interests, pulling civilisation back from the brink of disaster.

It is tempting to dismiss AI 2027 as science fiction run amok, until you consider its authors' heavyweight credentials. One of them, Eli Lifland, is a top competitive forecaster, while another, Daniel Kokotajlo, left his governance research job at OpenAI last year over concerns about the firm's responsible development of AI. The authors do caveat their predictions by acknowledging the difficulty of their undertaking. 'Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War III in 2027 would go,' they wrote.

Much of the scepticism about this report has revolved around the doubt that superintelligence will happen so soon. In fact, some of the report's authors reckon on a longer timeline for things to play out. Other AI experts don't even think we'll achieve artificial general intelligence – which precedes superintelligence – until 2040, at least. But this is cold comfort, counting on technological bottlenecks to avert disaster instead of our own ability to course-correct our collision path with robots.

Indeed, our track record of course-correction does not inspire confidence. We've had decades to respond to climate change and – never mind. Also, the prognosis for avoiding a war with robots is poor when we're already warring through tariffs.

In reading both scenarios, your horror might be rivalled only by wistfulness over the utopian miracles that the authors foresee AI making possible – fusion power, flying cars, poverty eradication – even as we march towards extinction or ungainly self-preservation. How do we stop being the very reason that we cannot have nice things?

Like most people who are on the outside of the AI world looking in – and our numbers grow by the day – I have little idea just how scared or sanguine to be about AI 2027. But if the world does end in 2030, of this I'm sure: I'll be livid that I hadn't eaten more carbs.

AI industry 'timelines' to human-like AGI are getting shorter. But AI safety is getting increasingly short shrift

Yahoo

15-04-2025

  • Business
  • Yahoo

Hello and welcome to Eye on AI. In this edition…AGI timelines are getting shorter, but so is the amount of attention AI labs seem to be paying to AI safety…venture capital enthusiasm for OpenAI alums' startups shows no sign of waning…a way to trace LLM outputs back to their source…and the military looks to LLMs for decision support, alarming humanitarian groups.

'Timelines' is a shorthand term AI researchers use to describe how soon they think we'll achieve artificial general intelligence, or AGI. While its definition is contentious, AGI is basically an AI model that performs as well as or better than humans at most tasks. Many people's timelines are getting alarmingly short. Former OpenAI policy researcher Daniel Kokotajlo and a group of forecasters with excellent track records have gotten a lot of attention for authoring a detailed scenario, called AI 2027, that suggests AGI will be achieved in, you guessed it, 2027. They argue this will lead to a sudden 'intelligence explosion' as AI systems begin building and refining themselves, rapidly leading to superintelligent AI.

Dario Amodei, the cofounder and CEO of AI company Anthropic, thinks we'll hit AGI by 2027 too. Meanwhile, OpenAI cofounder and CEO Sam Altman is cagey, trying hard not to be pinned down on a precise year, but he's said his company 'knows how to build AGI'—it is just a matter of executing—and that 'systems that start to point to AGI are coming into view.' Demis Hassabis, the Google DeepMind cofounder and CEO, has a slightly longer timeline—five to 10 years—but researchers at his company just published a report saying it's 'plausible' AGI will be developed by 2030.

The implications of short timelines for policy are profound. For one thing, if AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare. While I have previously predicted AI won't lead to mass unemployment, my view is predicated on the idea that AGI will not be achieved in the next five years. If AGI does arrive sooner, it could indeed lead to large job losses, as many organizations would be tempted to automate roles, and two years is not enough time to allow people to transition to new ones.

Another implication of short timelines is that AI safety and security ought to become more important. (The Google DeepMind researchers, in their latest AI safety paper, said AGI could lead to severe consequences, including the 'permanent end of humanity.') Jack Clark, a cofounder at Anthropic who heads its policy team, wrote in his personal newsletter, Import AI, a few weeks ago that short timelines called for 'more extreme' policy actions. These, he wrote, would include increased security at leading AI labs, mandatory pre-deployment safety testing by third parties (moving away from the current voluntary system), and spending more time talking about—and maybe even demonstrating—dangerous misuses of advanced AI models in order to convince policymakers to take stronger regulatory action.

But, contrary to Clark's position, even as timelines have shortened, many AI companies seem to be paying less, not more, attention to AI safety. For instance, last week, my Fortune colleague Bea Nolan and I reported that Google released its latest Gemini 2.5 Pro model without a key safety report, in apparent violation of commitments the company had made to the U.S. government in 2023 and at various international AI safety summits.

And Google is not alone—OpenAI also released its DeepResearch model without the safety report, called a 'system card,' publishing one only months later. The Financial Times also reported this week that OpenAI has been slashing the time it allows both internal and third-party safety evaluators to test its models before release, in some cases giving testers just a few days for evaluations that had previously been allotted weeks or months. Meanwhile, AI safety experts criticized Meta for publishing a system card for its new Llama 4 model family that provided only barebones information on the models' potential risks.

The reason safety is getting short shrift is clear: Competition between AI companies is intense, and those companies perceive safety testing as an impediment to speeding new models to market. The closer AGI appears to be, the more bitterly fought the race to get there first will be. In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for the collective whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way. The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of the U.S. AI labs—even a little bit. (It doesn't help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, including possible licensing requirements for leading AI labs, but now says he thinks AI companies can self-regulate on AI safety—are lobbying the government to eschew any legal requirements.)

Of course, having unsafe, uncontrollable AI would be in neither Washington's nor Beijing's interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there's a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies' timelines are wrong. With that, here's the rest of this week's AI news.

Jeremy

Before we get to the news, if you're interested in learning more about how AI will impact your business, the economy, and our societies (and given that you're reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I'll be there, of course. I hope to see you there too. You can apply to attend here. And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore? You can learn more about that event here.
