22-04-2025
How To Prompt The New ChatGPT, According To OpenAI
The latest version of ChatGPT is significantly more powerful but requires new prompting techniques. The model now follows instructions more literally and makes fewer assumptions about what you're asking for. This matters for entrepreneurs using the tool.
Don't build on outdated advice, and don't settle for sloppy prompts. You're better than that.
Poorly constructed prompts waste your time and money. Get it right, and you unlock a significantly more capable AI. OpenAI team members Noah MacCallum and Julian Lee have released extensive documentation for how to prompt their new models.
Here's a summary of their prompting guidance, so you can get the most out of the tool.
Prompting techniques that worked for previous models might actually hinder your results with the latest versions. ChatGPT-4.1 follows instructions more literally than its predecessors, which used to liberally infer intent. This is both good and bad. The good news is ChatGPT is now highly steerable and responsive to well-specified prompts. The bad news is your old prompts need an overhaul.
Most people still use basic prompts that barely scratch the surface of what's possible. They type simple questions or requests, then wonder why their results feel generic. OpenAI has now revealed how they trained the model to respond, helping you get exactly what you want from their most advanced models.
Start by organizing your prompts with clear sections. OpenAI recommends a basic structure with specific components:
• Role and objective: Tell ChatGPT who it should act as and what it's trying to accomplish
• Instructions: Provide specific guidelines for the task
• Reasoning steps: Indicate how you want it to approach the problem
• Output format: Specify exactly how you want the response structured
• Examples: Show samples of what you expect
• Context: Provide necessary background information
• Final instructions: Include any last reminders or criteria
You don't need all these sections for every prompt, but a structured approach gives better results than a wall of text.
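As a rough sketch, that structure can be assembled programmatically. The section names follow OpenAI's recommended components; the analyst task and wording below are invented for illustration:

```python
# Assemble a structured prompt from named sections.
# The section names mirror OpenAI's recommended structure; the task
# details below are hypothetical.
sections = {
    "Role and Objective": "You are a financial analyst. Summarize the attached quarterly report.",
    "Instructions": "Keep the summary under 200 words and cite figures exactly as written.",
    "Reasoning Steps": "Identify revenue trends first, then costs, then risks.",
    "Output Format": "Three short paragraphs of plain text.",
    "Context": "The report text appears below these instructions.",
}

def build_prompt(sections: dict[str, str]) -> str:
    """Join each section under a markdown heading, separated by blank lines."""
    return "\n\n".join(f"# {name}\n{body}" for name, body in sections.items())

prompt = build_prompt(sections)
```

Dropping a key from the dictionary drops the section, so the same helper serves both simple and complex prompts.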
For more complex tasks, OpenAI's documentation suggests using markdown to separate your sections. They also advise using special formatting characters around code (like backticks, which look like this: `) to help ChatGPT distinguish code from regular text, and using standard numbered or bulleted lists to organize information.
Separating information properly affects your results significantly. OpenAI's testing found that XML tags perform exceptionally well with the new models. They let you precisely wrap sections with start and end tags, add metadata to tags, and enable nesting.
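For instance, a single document can be wrapped in an XML-style tag that carries its metadata as attributes. This is a minimal sketch; the tag name and attribute names are arbitrary choices, not an OpenAI-mandated schema:

```python
# Wrap a document in an XML-style tag with metadata attributes, the
# delimiter style OpenAI found performs well with the new models.
# The tag and attribute names here are arbitrary, not a required schema.
def wrap_document(doc_id: int, title: str, content: str) -> str:
    return f'<doc id="{doc_id}" title="{title}">\n{content}\n</doc>'

block = wrap_document(1, "The Fox", "The quick brown fox jumps over the lazy dog.")
```

Because the tags nest, several wrapped documents can themselves be enclosed in an outer tag such as `<documents>` to mark the whole context section.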
JSON formatting performs poorly in long contexts (which the new models support), particularly when you provide multiple documents. Instead, try a format like "ID: 1 | TITLE: The Fox | CONTENT: The quick brown fox jumps over the lazy dog", which OpenAI found worked well in testing.
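A quick Python sketch of that pipe-delimited layout (the second document is invented for illustration):

```python
# Format multiple documents with the pipe-delimited layout OpenAI's
# testing favored over JSON for long contexts.
# The second document below is hypothetical.
docs = [
    (1, "The Fox", "The quick brown fox jumps over the lazy dog"),
    (2, "The Hare", "Slow and steady wins the race"),
]

lines = [f"ID: {i} | TITLE: {title} | CONTENT: {content}" for i, title, content in docs]
context = "\n".join(lines)
```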
ChatGPT can now function as an "agent" that works more independently on your behalf, tackling complex tasks with minimal supervision. Take your prompts to the next level by building these agents.
An AI agent is essentially ChatGPT configured to work through problems autonomously instead of just responding to your questions. It can remember context across a conversation, use tools like web browsing or code execution, and solve multi-step problems.
OpenAI recommends including three key reminders in all agent prompts: persistence (keeping going until resolution), tool-calling (using available tools rather than guessing), and planning (thinking before acting).
"These three instructions transform the model from a chatbot-like state into a much more 'eager' agent, driving the interaction forward autonomously and independently," the team explains. Their testing showed a 20% performance boost on software engineering tasks with these simple additions.
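A sketch of how those three reminders might be folded into a reusable system prompt. The wording here paraphrases the ideas rather than quoting OpenAI's exact reminder text:

```python
# The three agent reminders (persistence, tool-calling, planning) as a
# reusable system-prompt prefix. This wording paraphrases the concepts;
# it is not OpenAI's verbatim text.
AGENT_REMINDERS = [
    "Persistence: keep going until the user's query is fully resolved "
    "before ending your turn.",
    "Tool-calling: if you are unsure about relevant content or structure, "
    "use your available tools to check; do not guess.",
    "Planning: plan before each action, and reflect on the outcome of "
    "previous actions before acting again.",
]

def agent_system_prompt(task: str) -> str:
    """Prefix a task with the three agent reminders."""
    return "\n".join(AGENT_REMINDERS) + f"\n\nTask: {task}"
```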
The latest ChatGPT can handle an impressive 1 million token context window. The capabilities are exciting. According to OpenAI, performance remains strong even with thousands of pages of content. However, long-context performance degrades when complex reasoning across the entire context is required.
For best results with long documents, place your instructions at both the beginning and end of the provided context. Until now, this has been more of a failsafe than a required part of your prompt.
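The instructions-at-both-ends pattern is simple to apply. In this sketch the contract task and the `<context>` tag are hypothetical choices:

```python
# Repeat the same instructions before and after a long document, as
# recommended for long-context prompts. The contract task and the
# <context> tag below are hypothetical.
instructions = "Summarize the key obligations in the contract below."
long_document = "(imagine thousands of pages of contract text here)"

prompt = (f"{instructions}\n\n<context>\n{long_document}\n</context>\n\n"
          f"{instructions}")
```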
When using the new model with extensive context, be explicit about whether it should rely solely on provided information or blend it with its own knowledge. For strictly document-based answers, OpenAI suggests explicitly instructing: "Only use the documents in the provided External Context to answer the User Query."
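That instruction can be prepended mechanically. The quoted sentence is OpenAI's suggested wording; the `<external_context>` tag is an arbitrary delimiter choice:

```python
# Prepend OpenAI's suggested instruction for strictly document-based
# answers. The <external_context> tag is an arbitrary delimiter,
# not required wording.
STRICT = ("Only use the documents in the provided External Context "
          "to answer the User Query.")

def strict_prompt(external_context: str, user_query: str) -> str:
    return (f"{STRICT}\n\n<external_context>\n{external_context}\n"
            f"</external_context>\n\nUser Query: {user_query}")
```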
While GPT-4.1 isn't designed as a reasoning model, you can prompt it to show its work just as you could the older models. "Asking the model to think step by step (called 'chain of thought') can be an effective way to break down problems into more manageable pieces," the OpenAI team notes. This comes with higher token usage but delivers better quality.
A simple instruction like "First, think carefully step by step about what information or resources are needed to answer the query" can dramatically improve results. This is especially useful when working with uploaded files or when ChatGPT needs to analyze multiple sources of information.
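That instruction works well as a standard prefix. A minimal sketch, with an invented query:

```python
# Prepend a chain-of-thought trigger to any query. The trigger sentence
# is quoted from OpenAI's guidance; the query itself is hypothetical.
COT = ("First, think carefully step by step about what information or "
       "resources are needed to answer the query.")

query = "Which of the uploaded invoices are overdue?"
prompt = f"{COT}\n\nQuery: {query}"
```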
OpenAI has shared more extensive information on how to get the most from their latest models. The techniques represent actual training objectives for the models, not just guesswork from the community. By implementing their guidance around prompt structure, delimiting information, agent creation, long context handling, and chain-of-thought prompting, you'll see dramatic improvements in your results.
Success with ChatGPT comes from treating it as a thinking partner, not just a text generator. Follow the guidance directly from the source for better results from the same model everyone else is using.