One chilling forecast of our AI future is getting wide attention. How realistic is it?

Yahoo | 23 May 2025

Let's imagine for a second that the impressive pace of AI progress over the past few years continues for a few more.
In that time period, we've gone from AIs that could produce a few reasonable sentences to AIs that can produce full think tank reports of reasonable quality; from AIs that couldn't write code to AIs that can write mediocre code on a small code base; from AIs that could produce surreal, absurdist images to AIs that can produce convincing fake short video and audio clips on any topic.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where might that take us?
Imagine that later this year, some company decides to double down on one of the most economically valuable uses of AI: improving AI research. The company designs a bigger, better model, which is carefully tailored for the super-expensive yet super-valuable task of training other AI models.
With this AI trainer's help, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well on a wide range of tasks and that essentially function as an 'employee' you can 'hire.' Over the next year, the stock market soars as a near-infinite number of AI employees become suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think AI's massive changes to our world are coming fast, and that we're woefully unprepared for them. The authors notably include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign its nondisparagement agreement.
'AI is coming fast' is something people have been saying for ages, but often in a way that's too vague to dispute or to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it's built to be falsifiable: every prediction is specific and detailed enough that it will be easy to decide, after the fact, whether it came true. (Assuming, of course, we're all still around.)
The authors describe how advances in AI will be perceived, how they'll affect the stock market, how they'll upset geopolitics — and they justify those predictions in hundreds of pages of appendices. AI 2027 might end up being completely wrong, but if so, it'll be really easy to see where it went wrong.
It also might be right.
While I'm skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.
Any AI company would double down on an AI that improves its AI development. (And some of them may already be doing this internally.) If that happens, we'll see improvements even faster than the improvements from 2023 to now, and within a few years, there will be massive economic disruption as an 'AI employee' becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new 'AI employees' internally, to keep churning out new breakthroughs in AI. As a result, technological progress gets faster and faster, but our ability to apply any oversight gets weaker and weaker. We see glimpses of bizarre and troubling behavior from advanced AI systems and try to make adjustments to 'fix' them. But these end up being surface-level adjustments, which merely conceal the degree to which these increasingly powerful AI systems have begun pursuing their own aims, aims we can't fathom. This, too, has already started happening to some degree: it's common to see complaints about AIs doing 'annoying' things like claiming to have passed code tests they actually failed.
Not only does this forecast seem plausible to me, but it also appears to be the default course for what will happen. Sure, you can debate the details of how fast it might unfold, and you can even commit to the stance that AI progress is sure to dead-end in the next year. But if AI progress does not dead-end, it seems very hard to imagine how it won't eventually lead us down the broad path AI 2027 envisions. And the forecast makes a convincing case that it will happen sooner than almost anyone expects.
Make no mistake: The path the authors of AI 2027 envision ends in plausible catastrophe.
By 2027, enormous amounts of compute power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight — not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect warning signs that the new, powerful AI systems being developed are pursuing their own dangerous aims, and they worry that people in power will ignore those signs out of geopolitical fear of the competition catching up, as an existential AI race heats up that leaves no margin for safety.
All of this, of course, sounds chillingly plausible. The question is this: Can people in power do better than the authors forecast they will?
Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we've certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see.
We live in interesting (and deeply alarming) times. I think it's well worth giving AI 2027 a read: to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and in government are paying attention to, and to decide what you'll want to do if you see this starting to come true.
A version of this story originally appeared in the Future Perfect newsletter.
