
How to run a strategy pre-mortem with ChatGPT's o3 model
Jaspreet Bindra, Anuj Magazine
Let's say you are a project manager preparing to launch a product or a strategy. Imagine you could look into the future and see all the ways it could fail.
Let's begin with the term 'pre-mortem'. Unlike a post-mortem that analyses the reasons behind a particular outcome, a pre-mortem is a structured exercise done before you launch a major initiative. Everyone imagines the strategy has failed badly two years in the future and works backward to list the reasons. The team then turns those hypothetical failure causes into risk-mitigation actions or design changes while there's still time.
Why is doing pre-mortems so hard?
Because it's not just logic at play—it's ego, politics, and fear. Leaders hesitate to run pre-mortems because they're already emotionally invested in the strategy. Confirmation bias creeps in—we look for reasons it'll work, not why it might fail. And then there's the fear factor: calling out what could go wrong can feel like you're betting against the team. So the session becomes a formality. Risks are raised, maybe even nodded at, but rarely owned or acted upon.
The tool to use: ChatGPT's o3 model. Access it via https://chatgpt.com/
Example:
A chief strategy officer at a telecom firm greenlights a bold expansion into the Asia-Pacific market using AI-driven cybersecurity. Before execution, she runs a pre-mortem with OpenAI's o3 model using the following prompt:
Assume that the following strategy, which was selected as the most promising option after red teaming and simulation, has failed spectacularly two years after implementation.
Strategy chosen: [[Insert final strategy description here]]
Your task is to conduct a pre-mortem analysis—working backward from failure to identify what could have gone wrong.
Critically evaluate and respond to the following:
1. What were the early warning signs we missed or ignored?
2. Which key assumptions turned out to be false?
3. Which internal weaknesses—talent, systems, incentives, org structure—amplified the failure?
4. What external shocks (market, regulation, geopolitical, tech evolution) derailed the strategy?
5. Where did execution break down (timing, leadership, resourcing, dependencies)?
6. Which stakeholders (clients, partners, employees) resisted or disengaged, and why?
7. What feedback loops or course-correction mechanisms were missing or underused?
8. If you could go back, what 3 specific safeguards or contingency plans would you embed in the strategy before launch?
Be brutally honest. Your goal is not to defend the strategy but to make it failure-proof.
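If you prefer to script the exercise rather than paste the prompt into chatgpt.com, the same pre-mortem can be run through OpenAI's API. What follows is a minimal Python sketch, assuming the official openai SDK is installed, an OPENAI_API_KEY is set in the environment, and your account has API access to the o3 model; the strategy text is a hypothetical placeholder you would replace with your own.

from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY in the environment

client = OpenAI()

# Hypothetical placeholder; substitute your final strategy description.
strategy = "Expand into the Asia-Pacific market with AI-driven cybersecurity services."

prompt = (
    "Assume that the following strategy, which was selected as the most promising "
    "option after red teaming and simulation, has failed spectacularly two years "
    "after implementation.\n\n"
    f"Strategy chosen: {strategy}\n\n"
    "Conduct a pre-mortem analysis, working backward from failure: early warning "
    "signs, false assumptions, internal weaknesses, external shocks, execution "
    "breakdowns, stakeholder resistance, missing feedback loops, and three "
    "safeguards to embed before launch. Be brutally honest."
)

# The Responses API returns the model's answer as plain text.
response = client.responses.create(model="o3", input=prompt)
print(response.output_text)

The scripted route is useful when you want to re-run the same pre-mortem against several strategy variants and compare the outputs side by side.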
What makes ChatGPT o3 special?
1. Advanced reasoning capabilities: The o3 model excels in complex tasks requiring step-by-step logical reasoning.
2. Multimodal integration: o3 seamlessly combines text and visual data, allowing it to interpret and reason about images, charts, and graphics within its analytical processes.
3. Real-time tool access: The model incorporates live tool use into its reasoning, extending its capabilities at inference time (see the sketch below).
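To illustrate the third point, here is a hedged sketch of folding live web search into the same workflow via the API. The web_search_preview tool type and its availability for o3 are assumptions based on OpenAI's Responses API documentation and may differ for your account.

from openai import OpenAI

client = OpenAI()

# Built-in web search tool (assumed tool type; check the current API docs).
response = client.responses.create(
    model="o3",
    tools=[{"type": "web_search_preview"}],
    input=(
        "Which regulatory or market shifts in the Asia-Pacific telecom sector over "
        "the last six months could derail an AI-driven cybersecurity expansion? "
        "Cite your sources."
    ),
)
print(response.output_text)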
Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.
Jaspreet Bindra is co-founder and CEO of AI&Beyond. Anuj Magazine is also a co-founder.
