
Unable To Plan In 2025? Use AI To ‘Leave No Scenario Behind’
Why are businesses unable to plan?
Global leaders cited several concurrent challenges that make planning difficult:
As one senior executive put it recently: 'We used to have a core scenario in place with a handful of back-ups, but now we need to have literally hundreds of options on the table and know which one to follow at any given time. And the answer can change daily or weekly and vary by product line or country.'
The role of scenario analysis: Rehearsing the future
Peter Schwartz, a pioneer of scenario planning and author of The Art of the Long View, likened the use of scenarios to 'rehearsing the future.' Much like rehearsing a theater production, scenario development historically required a collaborative effort by numerous individuals and days, weeks, or months of refinement before the scenarios were ready for their intended audience. This traditional approach was generally time-consuming and resource-intensive.
The role of AI in scenario planning: 'No Scenario Left Behind'
Recently in Silicon Valley, PruVen Capital Managing Partner Ramneek Gupta shared the concept of 'no scenario left behind.' He and his colleagues have been studying advances in scenario planning and funding solutions that could enable business leaders to leverage advanced AI such as large language models (LLMs) and large geotemporal models (LGMs). LGMs use frameworks that analyze and reason across both time and space to exhaustively simulate virtually any event or scenario. These AI models provide dynamic risk modeling and real-time simulations for a vast array of business scenarios, helping business leaders overcome the inability to plan.
WTW's Jessica Boyd and Cameron Rye explain in a recent article that advances in generative AI tools have enabled the rapid generation of numerous scenario narratives across a wide range of disciplines. These models accelerate the traditional, resource-heavy process of scenario development, streamlining the steps while introducing novel perspectives that might be missed by human analysts. They help overcome the limitations of human imagination that arise when people overlook or underestimate potential risks that have not yet appeared in historical data. This can reduce potential blind spots that otherwise leave organizations vulnerable to highly disruptive events.
Already, AI breakthroughs have enabled the next stage of scenario planning using advanced language models in areas such as weather forecasting, including hurricane landfall predictions, as well as political and economic modeling. These models provide the opportunity to expand beyond the traditional exploratory scenarios that most businesses currently use. For example, normative scenarios (similar to a reverse stress test) can add significant value when they are built around specific business objectives.
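The normative, reverse-stress-test style of analysis described above can be illustrated with a toy simulation. The following minimal Python sketch (all figures, distributions, and the profit-floor objective are illustrative assumptions invented for this example, not taken from any vendor's model) generates a large set of joint demand/cost scenarios, then asks the reverse question: which scenarios breach a chosen business objective?

```python
import random

def simulate_scenario(rng):
    """Draw one joint outcome for a demand shock and an input-cost shock."""
    demand_change = rng.gauss(0.0, 0.10)   # demand shock, ~10% std dev (assumed)
    cost_change = rng.gauss(0.0, 0.08)     # cost shock, ~8% std dev (assumed)
    baseline_profit = 1_000_000            # hypothetical baseline annual profit
    profit = baseline_profit * (1 + demand_change) * (1 - cost_change)
    return {"demand": demand_change, "cost": cost_change, "profit": profit}

def reverse_stress_test(n=10_000, profit_floor=800_000, seed=7):
    """Simulate n scenarios, then isolate those that breach the objective."""
    rng = random.Random(seed)              # seeded for a reproducible sketch
    scenarios = [simulate_scenario(rng) for _ in range(n)]
    breaches = [s for s in scenarios if s["profit"] < profit_floor]
    return len(breaches) / n, breaches

breach_rate, breaches = reverse_stress_test()
print(f"{breach_rate:.1%} of simulated scenarios breach the profit floor")
```

In practice the shocks would come from far richer models (the LLM- and LGM-driven simulations discussed above) rather than two independent Gaussians, but the shape of the exercise is the same: start from the objective, then search the scenario space for the combinations that break it.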
Further, within the UK and Europe, new regulations focused on financial institutions have drawn considerable attention to scenario testing (in the UK: Operational Resilience 2025; in the EU: the Digital Operational Resilience Act (DORA)). These rules have further increased the importance of well-developed and clearly defined scenarios, including scenario testing with third parties.
How to start scenario planning and conducting an impact analysis
Recently, WTW's Laura Kelly explained how scenario building and impact analysis have become a crucial part of business planning and risk management. She suggests three key steps in scenario planning and impact analysis:
Effective leaders are not halted by uncertainty but rather mobilize around it. They identify the broad range of scenarios that might occur in a given set of circumstances, prioritize the greatest risks as well as the solutions that can mitigate these risks, and enable the company to thrive.
Related Articles


TechCrunch
AI-powered stuffed animals are coming for your kids
In Brief: Do AI chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids? That's how the companies selling these AI-powered kiddie companions are marketing them, but The New York Times' Amanda Hess has some reservations. She recounts a demonstration in which Grem, one of the offerings from startup Curio, tried to bond with her. (Curio also sells a plushie named Grok, with no apparent connection to the Elon Musk-owned chatbot.) Hess writes that this is when she knew, 'I would not be introducing Grem to my own children.' As she talked to the chatbot, she became convinced it was 'less an upgrade to the lifeless teddy bear' and instead 'more like a replacement for me.' She also argues that while these talking toys might keep kids away from a tablet or TV screen, what they're really communicating is that 'the natural endpoint for [children's] curiosity lies inside their phones.' Hess reports that she did, eventually, let her kids play with Grem — but only after she'd removed and hidden the voice box. They still talked to it and played games with it; then they were ready for some TV.


Forbes
Are Browsers Key To An Agentic AI Future? Opera, Perplexity Think So
AI-powered conversational search engine Perplexity is in the news for offering to buy Google's Chrome browser for $34.5 billion. But in December of 2024, Perplexity considered buying The Browser Co. And just months ago, Perplexity reportedly offered to buy Brave, the privacy-focused browser, for about $1 billion. Why does Perplexity want a web browser so badly? Possibly because a browser just might be key to our agentic AI future.

I recently interviewed Opera senior product leader Henrik Lexow on my TechFirst podcast. Opera, the 30-year-old browser company that pioneered tabbed browsing, pop-up blocking, and ad blockers, has about 300 million active users globally. This year, Opera was the first to bring AI agents right into our browsers in a project called Opera Neon. 'The agentic browser … is that sort of the new operating system?' Lexow asked during the podcast. 'It's a big question.'

Regardless of where the browser goes, Opera's pretty sure about the future of the internet itself. That's a huge shift, by the way. An agentic internet would be a massive and fundamental change from a user-driven internet to an agent-driven version. In a user-driven internet, you search, you see results, you make selections, you click links, you fill out forms, you book flights, and you buy products. In a sense, you are the agent. In an agentic internet, you tell something – maybe your agentic browser – to do those things for you. Except you don't say search, look, select, and buy; you say buy me more of the underwear I got six months ago. The agent then consults its memory, forms a plan, takes multiple steps, and handles it all: from which underwear you bought, and where you bought them, to finding the same ones online (and maybe checking around for better prices), to adding them to cart, to checking out … and reporting back to you with the results. The agent – in this case potentially an agentic browser – is therefore essentially a personal assistant, a force multiplier.
But will an agentic browser be the main way we engage with agents? Perplexity seems to think it's pretty important, given the company's persistent and repeated but so far unfruitful attempts to buy a browser. Opera certainly thinks so, if only because Opera has a browser, and a very innovative one at that. Opera launched AI in a browser back in 2023 in a project called ARIA. ARIA enabled contextual interactions within web pages in a GPT-based chat interface. Over time, that's evolved to a tripartite strategy under the Opera Neon brand:

The reality is that for many of us, most of our work happens in a browser. I'm writing this story in a browser. I recorded the interview in a browser. I've researched Opera and Perplexity in a browser. I made episode art for the podcast in a browser (thanks, Canva). Opera's thesis is that having agents embedded where you work makes them vastly more useful: they have access to your history, to your work, to your sites and apps. Important note: the Neon agentic browser's AI lives locally on your hardware, making it your agent, not Opera's, and not your employer's. This should boost your privacy, which is critical if you're going to give an agent access to very personal information including, likely, your credit card.

Of course, this is just one vision of the future. Apple with Siri, as justly maligned as it is, would have another vision. Google, with Gemini and its own vast fleet of Android-enabled phones, would have another. Microsoft's Copilot is another. And OpenAI, which has ChatGPT apps for mobile devices as well as full computers, might have yet another vision of how we'll integrate AI into our lives and work. So whether the browser will be the locus of our agentic AI future or not is yet to be determined. Remember the old proverb: every problem looks like a nail to the person who only has a hammer. But it seems like a fairly good bet to me.
Yahoo
MIT Student Drops Out Because She Says AGI Will Kill Everyone Before She Can Graduate
We're all probably feeling a little anxious about AI. It's horrible for the environment, is used as an excuse to fire workers, floods the internet with misinformation and slop, entrenches government surveillance, and appears to be driving people into psychosis.

And so, at a time when many college students are dropping out to join AI startups, one former MIT student says she called it quits because she's afraid of something altogether more catastrophic: that an artificial general intelligence (AGI), or superhuman AI, will completely wipe out the human race, Forbes reports.

"I was concerned I might not be alive to graduate because of AGI," Alice Blair, who enrolled at the university in 2023, told the publication. "I think in a large majority of the scenarios, because of the way we are working towards AGI, we get human extinction."

Blair now works as a technical writer at the nonprofit Center for AI Safety, and has no plans to go back to MIT. She joined MIT hoping to meet other people interested in making AI safe, and apparently was disappointed. "I predict that my future lies out in the real world," she told Forbes.

Nikola Jurković, a Harvard alum who served in his school's AI safety club, is sympathetic to the idea. "If your career is about to be automated by the end of the decade, then every year spent in college is one year subtracted from your short career," he told Forbes. "I personally think AGI is maybe four years away and full automation of the economy is maybe five or six years away."

Building an AGI, a system that matches or surpasses human intelligence, is much of the AI industry's endgame. OpenAI CEO Sam Altman called the recent launch of its poorly-received AI model GPT-5 a major stepping stone towards AGI, even going as far as to call it "generally intelligent." Many experts are skeptical, however, that we're anywhere near building such a powerful AI model, and point to recent signs that improvements to the tech are hitting a wall.
"It is extremely unlikely that AGI will come in the next five years," Gary Marcus, an AI researcher and outspoken critic of the industry, told Forbes. "It's just marketing hype to pretend otherwise when so many core problems (like hallucinations and reasoning errors) remain unsolved." And while there are very real forms of harm that AI can cause, outright extinction is a little far-fetched, Marcus said. In fact, the AI industry probably wants you to buy into AI doomsday prophesying. Altman and other tech CEOs raise these risks themselves. Doing so creates the impression that the tech is far more capable than it currently is, and allows these companies to control the narrative around how the tech should be regulated. And if you're envisioning an apocalypse like in "The Matrix" movies — human enslavement at the hands of machine intelligences that rebelled against their creators — you may be ignoring the very mundane forms of harm it's already causing, like job automation and the gutting of the environment. More on AI: Sam Altman Calls His Own AI Model "Annoying" After Being Forced to Raise It From the Dead Solve the daily Crossword