
ChatGPT-5 is coming — here's how it could change the way we prompt forever
Whether we're using chatbots to solve complex math problems, make travel plans or just lean on them for creative brainstorming, the way we structure a prompt can make or break the result. And with GPT-5 on the horizon, 'prompting' might start to feel less like typing or voicing a command and more like working alongside a smart partner with reasoning skills.
With previous versions like early GPT-4, prompt structure was fairly forgiving. You could ask for a list, a summary or a poem, and the chatbot usually delivered something usable.
But for more complex problems involving multi-step logic, tradeoffs or planning, the responses could become muddled, jump to conclusions or even hallucinate.
Now we may see a shift toward a subtle but more powerful tactic: step-by-step, or chain-of-thought, prompting. This style cues the model to break its reasoning into explicit steps, which makes the results noticeably better.
While ChatGPT-5 is still not available to the public, we can test chain-of-thought prompting with current chatbots to see how a more structured prompt produces a more accurate and useful response. As ChatGPT evolves, it's safe to assume other chatbots will follow.
Prompt 1: "What's the best U.S. city to visit in November?"
Prompt 2: "Let's think step by step about the best U.S. city to visit in November, considering weather, events, and budget."
I gave both prompts to Claude 4 Sonnet, and the difference in depth and detail was clear.
The second prompt produced a full reasoning trail, while the first returned a simple list of cities. Both were good, but one was far more helpful.
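You can run the same side-by-side test in code. Here's a minimal sketch using the OpenAI Python SDK; the model name is an assumption, so swap in whichever chat model you have access to:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    # Send a single user message and return the model's reply
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain = ask("What's the best U.S. city to visit in November?")
cot = ask("Let's think step by step about the best U.S. city to visit "
          "in November, considering weather, events, and budget.")
print(plain, "\n---\n", cot)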
Prompt 1: 'Give me a cheap, healthy meal plan for the week.'
Prompt 2: 'Create a step-by-step weekly meal plan that's affordable and healthy. Start by selecting 3-4 base ingredients I can use in multiple meals, then build out breakfast, lunch, and dinner options. Keep costs low and nutrition high.'
While chain-of-thought prompting takes more effort upfront, the outcome is worth it.
In this example, Gemini's response to the second prompt was exceptionally detailed, highlighting key base ingredients and explaining why each was chosen.
The first prompt also delivered a cheap, healthy meal plan, but it was far less nuanced and sophisticated.
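If you find yourself reusing this structure, it's easy to wrap in a small helper. Here's a minimal sketch; the function name and step wording are my own, purely illustrative:

def chain_of_thought_prompt(goal: str, steps: list[str]) -> str:
    # Assemble a step-by-step prompt from a goal and ordered sub-steps
    lines = [f"Let's work through this step by step: {goal}."]
    lines += [f"Step {i}: {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = chain_of_thought_prompt(
    "create an affordable, healthy weekly meal plan",
    ["Pick 3-4 base ingredients that work across multiple meals.",
     "Build breakfast, lunch and dinner options around them.",
     "Keep costs low and nutrition high."],
)

The payoff is consistency: every request arrives with the same explicit scaffolding, so you spend less time rephrasing and more time refining the steps themselves.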
Prompt 1: 'Jane has twice as many apples as Tom. Together they have 18 apples. How many does each person have?'
Prompt 2: 'Let's solve this step by step. Jane has twice as many apples as Tom, and together they have 18. First, define variables, then write an equation and solve it.'
Even with the first prompt, ChatGPT offered a basic step-by-step breakdown. The chain-of-thought prompt went further, explicitly defining variables before solving, the kind of structured reasoning we're sure to see even more of with ChatGPT-5.
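For reference, the reasoning the second prompt nudges the model toward looks like this: let t be Tom's apples, so Jane has 2t. Then t + 2t = 18, which gives 3t = 18 and t = 6. Tom has 6 apples and Jane has 12.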
OpenAI has hinted that its next major release will focus heavily on reasoning and planning, and the addition of agent-like abilities means models will go beyond basic response generation to make decisions, execute actions and pause to reflect.
That means the way we prompt will matter more than ever. We'll likely see greater rewards for structured prompts (step-by-step thinking, goal-based requests, conditional instructions).
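Concretely, that might mean prompts like 'First list the options, then compare them' (step-by-step), 'My goal is to spend under $500, so plan around that' (goal-based) or 'If the forecast shows rain, swap in indoor activities' (conditional).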
ChatGPT-5 might anticipate these behaviors and begin prompting us back in more collaborative ways.
In other words, asking "What should I do this weekend?" may soon feel outdated. Instead, we might start saying: "Here are my preferences, budget and time. Let's plan a weekend step by step. Make suggestions and I'll approve."
If you want to get ahead of the curve, the strategies above, from step-by-step thinking to goal-based and conditional requests, are already working well with GPT-4.5 and are likely to shine with GPT-5.
ChatGPT-5 will undoubtedly make AI smarter, but how we talk to it will still matter.
Step-by-step prompting is the secret to collaborating better with AI and getting the most out of it. As chatbots become more human-like, the best way to use their abilities is to ask them to think like we do: one step at a time.
