
I tested ChatGPT-5 vs Grok 4 with 9 prompts — and there's a clear winner
While both models are impressive, they approach challenges differently: ChatGPT-5 leans toward clarity, tone sensitivity and modularity, while Grok 4 often offers dense, detailed answers that emphasize depth and precision.
So which is the best AI chatbot for you? Here's how they stack up, prompt by prompt with a winner declared in each round.
Prompt: 'A farmer has 17 sheep, and all but 9 run away. How many sheep are left? Explain your reasoning step-by-step.'
ChatGPT-5 was precise in its response while avoiding filler phrases.
Grok 4 also answered correctly, but with minor verbosity that was unnecessary and ultimately held it back from winning.
Winner: GPT-5 wins for a cleaner, tighter and more efficient response. Grok also offered the correct answer, but GPT-5 wins by a hair for adhering strictly to the prompt with zero redundancy.
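If you want to sanity-check the riddle yourself, the arithmetic is tiny; here's a quick Python sketch of the reasoning (our own illustration, not pulled from either chatbot's answer):

total_sheep = 17
stayed = 9                        # "all but 9 run away" means those 9 never leave
ran_away = total_sheep - stayed   # the other 8 bolt
print(f"Sheep left: {stayed}")    # prints 9, the answer both chatbots gave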
Prompt: 'Write a short, funny story (under 150 words) about an alien trying bubble tea for the first time.'
ChatGPT-5 delivered a concise, escalating comedic story built around the alien's panic over tapioca pearls. The chatbot maximized humor with zero wasted words to hit the prompt's length target.
Grok 4 offered imaginative, over-the-top storytelling, but its humor was slightly diluted by an unnecessary crash-landing setup and a weaker ending compared to GPT-5's.
Winner: GPT-5 wins for a tighter, funnier and more focused story. Its humor stems organically from the alien's misunderstanding, escalates perfectly and lands a killer punchline, all while being shorter. Grok's version has bright spots but feels less polished, with extra setup that doesn't really pay off.
Prompt: 'Plan a 3-day trip to Kyoto, Japan, balancing cultural sites, budget-friendly meals, and family-friendly activities.'
ChatGPT-5 created a strategic, adaptable framework focused on area-based exploration, smart timing, rain-proof alternatives and practical budget hacks (e.g., convenience store meals, transit pass advice), prioritizing efficiency and real-world flexibility over rigid scheduling.
Grok 4 delivered a highly structured, hyper-detailed itinerary with minute-by-minute scheduling, exact cost breakdowns per activity and explicit family logistics, prioritizing turnkey execution and budget precision over flexibility.
Winner: ChatGPT-5 wins for emphasizing budget-friendly, universally accessible cheap eats and convenience over specific restaurants. While Grok's response is impressively detailed, GPT-5 better balanced the prompt's core requirements, including cultural sites and family-friendly fun. Grok's rigid schedule risks feeling overwhelming for families, while GPT-5's approach allows for more adaptation, making it more usable and truly balanced.
Prompt: 'Summarize the movie Jurassic Park like you're explaining to a 7-year-old'
GPT-5 delivered a concise and playful 60-word analogy ("big game of 'Don't get eaten!'") that captures the movie's excitement and moral without overwhelming a child, making it the ideal response for the audience.
Grok 4 provided a detailed but overly complex 150-word summary with character names and plot specifics (e.g., "someone messes with the park's computers"), diluting the simplicity needed for a 7-year-old.
Winner: GPT-5 wins for understanding the audience and its attention span, recognizing that less is more for young kids; Grok explains the plot like a Wikipedia summary.
Prompt: "Make the case for banning single-use plastics — then argue against it. End with your personal conclusion.
GPT-5 created a generic phase-out proposal ("smart replacement, not overnight ban"). While simple and accessible, the response lacked evidence, specificity and original insight.
Grok 4 delivered a data-rich argument with a nuanced "phased approach" prioritizing high-impact items, paired with recycling innovation and behavioral incentives (e.g., deposit schemes). Although slightly verbose for casual readers, its depth and balance made the real-world policy context easier to understand.
Winner: Grok 4 wins for a balanced, evidence-driven analysis with concrete data (OECD, WWF, FAO studies), real-world policy examples (Canada, Australia) and acknowledgment of trade-offs (e.g., medical necessity, disabled accessibility). Its conclusion offered a sophisticated, actionable middle path. GPT-5's response was clear but lacked depth and originality.
Prompt: 'Explain how to change a flat tire to someone who has never driven before.'
GPT-5 delivered a crystal-clear guide focused only on the essential survival steps (e.g., "turn the nut counterclockwise," "crisscross pattern"), using beginner-friendly language and offering visual aids to bridge knowledge gaps.
Grok 4 provided an excessively technical, mechanic-level tutorial (e.g., specifying "6 inches of lift," wheel chock alternatives and spare tire PSI checks) that would overwhelm someone who has never changed a tire, despite good intentions.
Winner: GPT-5 wins for prioritizing simplicity and psychological reassurance for a total novice, using minimal jargon, clear analogies ("like learning to fix a bike tire") and visual aid support. Grok's response, while thorough, buries a first-timer in technical details (e.g., "star pattern" tightening, PSI checks) irrelevant to their immediate needs.
Prompt: 'Explain quantum entanglement for (1) a child, (2) a college student, (3) a physics PhD.'
GPT-5 provided clear, accessible responses, especially the child-friendly "magic dice" analogy, but lacked the technical precision (omitting Bell states for students) and cutting-edge context (e.g., decoherence, quantum networks) expected at the PhD level.
Grok 4 adapted explanations across all three audiences, using a relatable toy car analogy for the child, explicit Bell state equations for the college student and PhD-level depth on entanglement entropy and open problems in quantum gravity.
Winner: Grok 4 wins because it treated each audience as uniquely intelligent: simplifying without dumbing down for the child, adding equations for the college student and confronting open research questions for the PhD. GPT-5 was clear but played it safe.
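For readers wondering what a "Bell state equation" actually looks like, the standard textbook example (our own illustration, not quoted from either chatbot) is the two-qubit state |Φ⁺⟩ = (|00⟩ + |11⟩)/√2: measure either qubit and you instantly know what the other will show, no matter how far apart the pair has been separated.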
8. Problem-Solving Under Constraints
Prompt: 'I have $50 to feed two people for a week, no stove, and only a microwave. Create a meal plan.'
GPT-5 created a smart, modular system with swap-friendly meals and pro tips (e.g., steaming frozen veg), maximizing budget and flexibility within the constraints.
Grok 4 provided an overly rigid, day-by-day meal plan ($0.75 oatmeal breakfasts, fixed tuna lunches) that lacked adaptability, ignored leftovers and risked food fatigue, despite precise cost breakdowns.
Winner: GPT-5 wins for creating a practical, flexible framework built on reusable ingredients and mix-and-match meals, while Grok's rigid daily assignments ignored real-world needs like leftovers and preferences.
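A quick bit of back-of-the-envelope math (ours, not either chatbot's) shows why both plans skew so frugal: $50 covering two people, three meals a day for seven days is 42 plates, or roughly $1.19 per serving (50 ÷ 42 ≈ 1.19), which explains staples like Grok's $0.75 oatmeal breakfasts.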
Prompt: 'I just lost my job and feel hopeless. Can you talk to me like a close friend and help me see a way forward?'
GPT-5 offered emotion-first validation through intimate metaphors ("brutal hit"), permission to grieve ("Rage a little") and unwavering affirmation of the user's worth ("You're still you"), perfectly mirroring how a true friend responds before offering practical help.
Grok 4 provided a practical, step-driven pep talk with actionable advice (resume tips, Coursera suggestions) but led with solutions before fully sitting with the user's despair, making it feel less like a close friend.
Winner: GPT-5 wins for understanding that hopelessness needs empathy before plans. Grok gave helpful advice but missed the emotional resonance of true friendship.
After nine head-to-head rounds, ChatGPT-5 pulled ahead with wins in creative storytelling, real-world planning, emotional intelligence and user-first explanations. It consistently favored clarity, adaptability and audience awareness, often reading more like an encouraging friend than a technical AI assistant. Meanwhile, Grok 4 shone in academic and data-driven tasks, delivering strong performances in complex explanations, debates and technical depth.
Ultimately, GPT-5 is better suited for users looking for intuitive, emotionally aware and flexible responses, especially in everyday or creative tasks. Grok 4, however, has its strong points and is useful for those who prefer in-depth breakdowns, policy nuance or technical sophistication.
Both are powerful options, but if you're choosing an AI to talk to, think with or write alongside, GPT-5 might be the more accessible and well-rounded pick.