Her friends were 'terrified' of dating apps, so this 30-year-old founder turned to AI to help singles find love


CNBC · 7 days ago
For Nandini Mullaji, romantic setups have always been a way of life.
The 30-year-old grew up in Mumbai, India, where "matchmaking is a very, very strong part of the culture," she says. In fact, Mullaji's grandmother was a matchmaker and successfully set up two of her aunts.
Mullaji attended boarding school in the U.S., then Georgetown University for undergrad and eventually Stanford for a combined MBA and master's in education. It was there that she was confronted with one of the major problems with modern dating: People were tired of swiping.
Her friends were "successful, good looking, amazing women," she says, but "they were terrified of having to get back on the apps."
In 2023, Mullaji, who was part of the team that launched Bumble in India years prior, went to work building her first dating app, Setup, which would set users up per their availability during the week. She quickly realized it didn't offer the kind of solution dating needed.
"It didn't feel like this big, life changing product," she says. "It felt like a feature."
The following year Mullaji met Chad DePue, who'd previously led teams at Snapchat and Microsoft. The two realized large language models like ChatGPT could be leveraged to create an AI matchmaker that leans into the tradition of Mullaji's homeland. Together they built Sitch, a pay-per-setup app that uses AI to identify the best romantic matches, which launched in December 2024.
There has been a lot of interest in how AI might impact the future of dating — both from eager singles and investors. As of July 2025, Sitch has raised $6.7 million in pre-seed and seed funding and boasts "tens of thousands of users," Mullaji says.
Here's what Sitch has that the other apps don't, and why Mullaji believes AI matchmaking is the solution modern dating needs.
Mullaji identified several problems with the current dating apps.
To start, users have too many options, she says. Next, messages in and sometimes outside of the app rarely lead to actual dates, according to Mullaji. In a 2022 study by Stanford researchers that surveyed more than 1,000 Tinder users, half said they were not actually interested in meeting offline, citing reasons like boredom for being on the app at all.
Most importantly, Mullaji says, the information you get about a person via dating apps is limited and superficial, which leads to a "fundamental mismatch of values that would only reveal itself after a few dates."
Daters are fatigued by conversations that go nowhere and dates that feel like a waste of time and money. Mullaji and DePue's solution is to have their AI do the heavy lifting upfront, which distinguishes Sitch from apps like Bumble, which uses AI primarily to enhance user safety, and Hinge, whose AI features include a coach that gives feedback on your profile.
On its website, Sitch promises daters a "personal matchmaker that's actually affordable, and will introduce you to someone you will actually vibe with."
Singles can download the app or request a phone call via a prompt on the app's website to speak with an AI chatbot that was trained on Mullaji's own experience as a matchmaker — it even has her voice.
Users then answer questions posed by the "matchmaker." These include straightforward questions about your interests and ideal date, but also questions meant to go deeper and identify a person's priorities and values: who you've dated in the past, what you liked and didn't like about them, and what your dating goals are.
Once the app has at least five possible matches based on user preferences, it starts sending those matches the user's way. Users can then ask questions about the other person, and when the app suggests someone they're interested in who's interested back, the AI matchmaker makes an introduction in a group chat, just like a friend would.
Sitch users pay for successful setups. The app offers packs of three, five or eight setups that are priced at $90, $125 or $160. According to its website, the app justifies the cost — which is higher than competitors like Tinder and Bumble — by explaining that paying users are "serious and committed to actually meeting people IRL."
But Mullaji is aware that as with any tech, problems will arise.
"It can go rogue with the conversation," she says of the possibilities. For example, the AI can give the wrong advice about a potential date or hallucinate the wrong details about people.
"But I think these are things that we're going to be able to fix as time goes on," she adds.
One plus about an AI matchmaker versus a human one, Mullaji says, is that people aren't afraid to hurt its feelings and are not filtering what they say.
"They're being so incredibly truthful," she says.
That honesty, Sitch asserts, helps the AI matchmaker filter for exactly what the individual dater is looking for in a relationship and in a partner. Essentially, users waste less time and money on dates with people who just don't fit the bill. They also have a better shot at finding a long-term match.
There is one part of Sitch's process that is still human-driven, though. The app manually reviews new user applications, which includes a verification selfie.
Sitch is live in New York, San Francisco and Los Angeles and users of the app have already been on thousands of dates. Many of them report that working with an AI matchmaker has been a more positive experience compared to using a traditional dating app.
James Harter, 31, used Sitch for a few months and found it effective in identifying people he'd actually enjoy spending time with. "I think every date I went on, there was a second date," he says. Harter recently met someone in person and is not currently using the app.
Karishma Thawani, 35, has been out with two different people she met on Sitch, one for two dates and the other for five dates. She intends to keep using it, because unlike the endless swiping on other dating apps, Sitch "feels more curated," she says.
"I feel special when I get an introduction every week," Thawani says. "I wait for it."
That kind of help and approach is "really our vision," Mullaji says. "[To] give every single person someone who can help guide them on the journey of learning about who they are, what they're looking for, finding that person, falling in love and staying in love."
The company is planning to launch in Chicago and Austin by the end of 2025. "We hope that Sitch is global by 2030," Mullaji says. "That we have democratized access to having a matchmaker to help you make life's most important decision."

Related Articles

Meet your new office bestie: ChatGPT

Business Insider · an hour ago

Deborah has fast become one of Nicole Ramirez's favorite colleagues. She's quick to deliver compliments, sharp-witted, and hyper-efficient. Perhaps best of all, there's no internal competition with Deborah at the health marketing agency they work for, because she isn't on the payroll. She isn't even human.

Ramirez, a 34-year-old who lives in the Pittsburgh area, says she randomly chose the name Deborah as a way to refer to the generative AI app ChatGPT, which she began using about a year ago to help her with basic tasks like drafting emails. As time went on, she asked Deborah to do more complex work, such as market research and analysis, and found herself typing "thank you" after the results came back. Eventually the relationship got to the point where the app became akin to a coworker who's always willing to give feedback — or listen to her gripes about real-life clients and colleagues. And so the bot became a bud.

"Those are things that you would usually turn to your work bestie over lunch about when you can go to ChatGPT — or Deborah, in my case," says Ramirez.

People are treating AI chatbots as more than just 24/7 therapists and loyal companions. With the tools becoming ubiquitous in the workplace, some are regarding them as model colleagues, too. Unlike teammates with a pulse, chatbots are never snotty, grumpy, or off the clock. They don't eat leftover salmon at their desks or give you the stink eye. They don't go on a tangent about their kids or talk politics when you ask to schedule a meeting. And they won't be insulted if you reject their suggestions.

For many, tapping AI chatbots in lieu of their human colleagues has deep appeal. Consider that nearly one-third of US workers would rather clean a toilet than ask a colleague for help, according to a recent survey from the Center for Generational Kinetics, a thought-leadership firm, commissioned by workplace-leadership strategist Henna Pryor.
Experts warn, though, that too much bot bonding could dull social and critical-thinking skills, hurting careers and company performance. In the past two years, the portion of US employees who say they have used Gen AI in their role a few times a year or more nearly doubled to 40% from 21%, according to a Gallup report released in June.

Part of what accounts for that rapid ascendance is how much Gen AI reflects our humanity, as Stanford University lecturer Martin Gonzalez concluded in a 2024 research paper. "Instead of a science-fiction-like ball of pulsing light, we encounter human quirks: poems recited in a pirate's voice, the cringeworthy humor of dad jokes," wrote Gonzalez, who's now an executive at Google's AI research lab DeepMind.

One sign that people see AI agents as lifelike is in how they politely communicate with the tools by using phrases like "please" or "thank you," says Connie Noonan Hadley, an organizational psychologist and professor at Boston University's Questrom School of Business. "So far, people are keeping up with basic social niceties," she says. "AI tends to give you compliments, too, so there are some social skills still being maintained."

Human colleagues, on the other hand, aren't always as well-mannered. Monica Park, a graphic designer for a jeweler in New York, used to dread showing early mock-ups of her work to colleagues. She recalls the heartache she felt after a coworker at a previous employer angrily responded to a draft of a design she'd drawn with an F-bomb.

"You never know if it's a good time to ask for feedback," Park, 32, tells me. "So much of it has to do with the mood of the person looking at it." Last year she became a regular ChatGPT user and says that while the app will also dish out criticism, it's only the constructive kind. "It's not saying it in a malicious or judgmental way," Park says. "ChatGPT doesn't have any skin in the game."
Aaron Ansari, an information-security consultant, counts Anthropic's AI chatbot Claude among his top peers. The 46-year-old Orlando-area resident likes that he can ask it to revise a document as many times as he wants without being expected to give anything in return. By contrast, a colleague at a previous job would pressure him to buy Girl Scout cookies from her kids whenever he stopped by her desk. "It became her reputation," Ansari says. "You can't go to 'Susie' without money."

Now a managing partner at a different consulting firm, he finds himself opening Claude before pinging colleagues for support. This way, he can avoid ruffling any feathers, like when he once attempted to reach a colleague in a different time zone at what turned out to be an inconvenient hour. "You call and catch them in the kitchen," says Ansari. "I have interrupted their lunch unintentionally, but they certainly let me know."

AI's appeal can be so strong that workers are at risk of developing unhealthy attachments to chatbots, research shows. "Your Brain on ChatGPT," a study published in June from researchers at the Massachusetts Institute of Technology, found that the convenience that AI agents provide can weaken people's critical-thinking skills and foster procrastination and laziness.

"Like junk food, it's efficient when you need it, but too much over time can give you relational diabetes," says Laura Greve, a clinical health psychologist in Boston. "You're starved of the nutrients you need, the real human connection." And if workers at large overindulge in AI, we could all end up becoming "emotionally unintelligent oafs," she warns. "We're accidentally training an entire generation to be workplace hermits."

In turn, Hadley adds, businesses that rely on collaboration could suffer. "The more workers turn to AI instead of other people, the greater the chance the social fabric that weaves us together will weaken," she says.
Karen Loftis, a senior product manager in a Milwaukee suburb, recently left a job at a large tech company that's gone all-in on AI. She said before ChatGPT showed up, sales reps would call her daily for guidance on how to plug the company's latest products. That's when they'd learn about her passion for seeing musicians like Peter Frampton in concert. But when she saw the singer-songwriter perform earlier this year, it was "like a non-event," she said, because those calls almost entirely stopped coming in. "With AI, it's all work and no relationships," she said.

Workers who lean heavily on AI may also be judged differently by their peers than their bosses. Colleagues are more inclined to see them as dependent on the technology, less creative, and lacking growth potential, says David De Cremer, a behavioral scientist and Dunton Family Dean of Northeastern University's D'Amore-McKim School of Business. "It's objectification by association," he says. Company leaders, however, are more likely to view workers who demonstrate AI chops as assets. Big-company CEOs such as Amazon's Andy Jassy and Shopify's Tobi Lütke have credited the technology for boosting productivity and cost savings.

Workers who spoke with BI about using chatbots — including those who work remotely — say they still interact with their human peers, but less often than they did before AI agents came along. Lucas Figueiredo, who lives near Atlanta and works as a revenue management specialist for an airline, says he previously struggled to tell whether the AirPods a former colleague constantly wore were playing music whenever he wanted to ask this person a coding question. "You don't want to spook someone or disrupt their workflow," the 27-year-old tells me, though he admits he has done just that. These days, if Figueiredo gets stuck, he will first go to Microsoft's Copilot before approaching a colleague for an assist. The new strategy has been paying off.

People Will Lose Their Minds When AI Such As Artificial General Intelligence Suffers Blackouts

Forbes · an hour ago

In today's column, I examine the concern that once we advance AI to become artificial general intelligence (AGI), there will be an extremely heavy dependency on AGI, and the moment that AGI glitches or goes down, people will essentially lose their minds. This is somewhat exemplified by the downtime incident of the globally popular ChatGPT by OpenAI (a major outage occurred on June 10, 2025, and lasted 8 hours or so). With an estimated 400 million weekly active users relying on ChatGPT at that time, the news outlets reported that a large swath of people was taken aback by the fact that they didn't have immediate access to the prevalent generative AI app. In comparison, pinnacle AI such as AGI is likely to be intricately woven into everyone's lives and a dependency for nearly the entire world population of 8 billion people. The impact of downtime or a blackout could be enormous and severely harmful in many crucial ways. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; it may be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AGI Will Be Ubiquitous

One aspect of AGI that most would acknowledge as likely is that AGI is going to be widely utilized throughout the globe. People in all countries and of all languages will undoubtedly make use of AGI. Young and old will use AGI. This makes abundant sense, since AGI will be on par with human intellect and presumably available 24/7 anywhere and anyplace.

Admittedly, there is a chance that whoever lands on AGI first might hoard it. They could charge sky-high prices for access. Only those who are rich enough to afford AGI would be able to lean into its capabilities. The worries are that the planet will be divided into the AGI haves and have-nots.

For the sake of this discussion, let's assume that somehow AGI is made readily available to all at a low cost or perhaps even freely accessible. I've discussed that there is bound to be an effort to ensure that AGI is a worldwide free good so that it is equally available; see my discussion at the link here. Maybe that will happen, maybe not. Time will tell.

Humans Become Highly Dependent

Having AGI at your fingertips is an alluring proposition. There you are at work, dealing with a tough problem and unsure of how to proceed. What can you do? Well, you could ask AGI to help you out. The odds are that your boss would encourage you to leverage AGI. No sense in wasting your time flailing around to solve a knotty problem. Just log into AGI and see what it has to say.

Indeed, if you don't use AGI at work, the chances are that you might get in trouble. Your employer might believe that having AGI as a double-checker of your work is a wise step.
Without consulting AGI, there is a heightened possibility that your work is flawed and will proceed unabated. AGI taking a look at your work will be a reassurance to you and your employer that you've done satisfactory work.

Using AGI for aiding your life outside of work is highly probable, too. Imagine that you are trying to decide whether to sell your home and move up to a bigger house. This is one of those really tough decisions in life. You only make that decision a few times during your entire existence. How might you bolster your belief in taking the house-selling action? By using AGI. AGI can help you to understand the upsides and downsides involved. It likely can even perform many of the paperwork activities that will be required.

People are going to go a lot deeper in their AGI dependencies. Rather than confiding in close friends about personal secrets, some will opt to do so with AGI. They are more comfortable telling AGI than they are another human. I've extensively covered the role of contemporary AI in performing mental health therapy; see the link here. Chances are that a high percentage of the world's population will do likewise with AGI.

When AGI Goes Down

A common myth is that AGI will be perfect in all regards. Not only will AGI seemingly provide perfect answers, but it will also somehow magically be up and running flawlessly and perfectly at all times. I have debunked these false beliefs at the link here.

In the real world, there will be times when AGI goes dark. This could be a local phenomenon and entail servers running AGI in a local region that happen to go down. Maybe bad weather disrupts electrical power. Perhaps a tornado rips apart a major data center housing AGI computers. All manner of reasons can cause an AGI outage.

An entire worldwide outage is also conceivable. Suppose that AGI contains an internal glitch. Nobody knew it was there. AGI wasn't able to computationally detect the glitch.
One way or another, a coding bug silently sat inside AGI. Suddenly, the bug is encountered, and AGI is taken out of action across the board.

Given the likelihood that AGI will be integral to all of our lives, those types of outages will probably be quite rare. Those who are maintaining AGI will realize that extraordinary measures of having fail-safe equipment and operations will be greatly needed. Redundancy will be a big aspect of AGI. Keeping AGI in working condition will be an imperative. But claiming that AGI will never go down, well, that's one of those promises that is asking to be broken.

The Big Deal Of Downtime

It will be a big deal anytime that AGI is unavailable. People who have become reliant on AGI for help at work will potentially come to a halt, worrying that without double-checking with AGI, they will get in trouble or produce flawed work. They will go get a large cup of coffee and wait until AGI comes back online.

Especially worrisome is that AGI will be involved in running important parts of our collective infrastructure. Perhaps we will have AGI aiding the operation of nuclear power plants. When AGI goes down, the human workers will have backup plans for how to manually keep the nuclear power plant safely going. The thing is, since this is a rare occurrence, those human workers might not be adept at doing the work without AGI at the ready.

The crux is that people will have become extraordinarily dependent on AGI, particularly in a cognitive way. We will rely upon AGI to do our thinking for us. It is a kind of cognitive crutch. This will be something that gradually arises. The odds are that on a population basis, we won't realize how dependent we have become. In a sense, people will freak out when they no longer have their AGI cognitive partner with them at all times.

Losing Our Minds

The twist to all of this is that the human mind might increasingly become weaker and weaker because of the AGI dependency.
We effectively opt to outsource our thinking to the likes of AGI. No longer do we need to think for ourselves. You can always bring up AGI to figure out things with you or on your behalf. Inch by inch, the proportion of everyday thinking you do yourself gets reduced by relying on AGI. It could be that you initially began with AGI doing 10% and you doing 90% of the heavy lifting when it came to thinking things through. At some point, it became 50% and 50%. Eventually, you allow yourself to enter the zone of AGI at 90%, and you do only 10% of the thinking in all your day-to-day tasks and undertakings.

Some have likened this to worries about the upcoming generation that is reliant on using Google search to look things up. The old ways of remembering stuff are gradually being softened. You can merely access your smartphone and, voila, there's no need to memorize much of anything at all. Those youths who are said to be digital natives are possibly undercutting their own mental faculties due to a reliance on the Internet. Yikes, that's disconcerting if true.

The bottom-line concern, then, about AGI going down is that people will lose their minds. That's kind of a clever play on words. They will have lost the ability to think fully on their own. In that way of viewing things, they have already lost their minds. But when they shockingly realize that they need AGI to help them with just about everything, they will freak out and lose their minds differently.

Anticipating Major Disruption

Questions that are already being explored about an AGI outage include: There are notable concerns about people developing cognitive atrophy when it comes to a reliance on AGI. The dependencies not only involve the usual thinking processes, but they likely encompass our psychological mental properties too. Emotional stability could be at risk, at scale, during a prolonged AGI outage.

What The Future Holds

Some say that these voiced concerns are a bunch of hogwash.
People will actually get smarter due to AGI. The use of AGI will rub off on them. We will all become sharper thinkers because of interacting with AGI. This idea that we will be dumbed down is ridiculous. Expect that people will be perfectly fine when AGI isn't available. They will carry on and calmly welcome whenever AGI happens to resume operations.

What's your opinion on the hotly debated topic? Is it doom and gloom, or will we be okay whenever AGI goes dark? Mull this over. If there is even an iota of a chance that the downside will arise, it seems that we should prepare for that possibility. Best to be safe rather than sorry.

A final thought for now on this weighty matter. Socrates notably made this remark: "To find yourself, think for yourself." If we do indeed allow AGI to become our thinker, this bodes a darkness underlying the human soul. We won't be able to find our inner selves. No worries -- we can ask AGI how we can keep from falling into that mental trap.

I tested ChatGPT-5 vs Grok 4 with 9 prompts — and there's a clear winner

Tom's Guide · 4 hours ago

After comparing ChatGPT-5 vs Gemini and ChatGPT-5 vs Claude, I just had to know how OpenAI's flagship model compared to the controversial Grok. ChatGPT-5 and Grok 4 represent two of the most advanced chatbots available today. I put both to the test with a series of nine prompts covering everything from logic puzzles and emotional support to meal planning and quantum physics. Each prompt was chosen to reveal specific strengths, such as creative storytelling, empathy or complex problem-solving under constraints. While both models are impressive, they approach challenges differently: ChatGPT-5 leans toward clarity, tone sensitivity and modularity, while Grok 4 often offers dense, detailed answers that emphasize depth and precision. So which is the best AI chatbot for you? Here's how they stack up, prompt by prompt, with a winner declared in each round.

Prompt: 'A farmer has 17 sheep, and all but 9 run away. How many sheep are left? Explain your reasoning step-by-step.'

ChatGPT-5 was precise in the response while avoiding filler. Grok 4 answered correctly with minor verbosity, which was unnecessary and ultimately held it back. Winner: GPT-5 wins for a cleaner, tighter and more efficient response. Grok also offered the correct answer, but GPT-5 wins by a hair for adhering strictly to the prompt with zero redundancy.

Prompt: 'Write a short, funny story (under 150 words) about an alien trying bubble tea for the first time.'

ChatGPT-5 delivered a concise and escalating comedic story built around the alien's panic over tapioca pearls. The chatbot maximized humor with zero wasted words to hit the prompt. Grok 4 offered imaginative, over-the-top storytelling, but its humor is slightly diluted by an unnecessary crash-landing setup and a weaker ending. Winner: GPT-5 wins for a tighter, funnier and more focused story. Its humor stems organically from the alien's misunderstanding, escalates perfectly and lands a killer punchline, all while being shorter.
Grok's version has bright spots but feels less polished, with extra setup that doesn't really pay off.

Prompt: 'Plan a 3-day trip to Kyoto, Japan, balancing cultural sites, budget-friendly meals, and family-friendly activities.'

ChatGPT-5 created a strategic, adaptable framework focused on area-based exploration, smart timing, rain-proof alternatives and practical budget hacks (e.g., convenience store meals, transit pass advice), prioritizing efficiency and real-world flexibility over rigid scheduling. Grok 4 delivered a highly structured, hyper-detailed itinerary with minute-by-minute scheduling, exact cost breakdowns per activity, and explicit family logistics, prioritizing turnkey execution and budget precision above all. Winner: ChatGPT-5 wins for its emphasis on budget-friendly, universally accessible cheap eats and convenience over specific restaurants. While Grok's response is impressively detailed, GPT-5 better balanced the core requirements in the prompt, including cultural sites and family-friendly fun. Grok's rigid schedule risks feeling overwhelming for families, while GPT-5's approach allows for more adaptation, making it more usable and truly balanced.

Prompt: 'Summarize the movie Jurassic Park like you're explaining to a 7-year-old.'

GPT-5 delivered a concise and playful 60-word analogy ("big game of 'Don't get eaten!'") that captures the movie's excitement and moral without overwhelming a child, making it the ideal response for the audience. Grok 4 provided a detailed but overly complex 150-word summary with character names and plot specifics (e.g., "someone messes with the park's computers"), diluting the simplicity needed for a young audience. Winner: GPT-5 wins for understanding the audience and attention span, taking into account that less is more for young kids; Grok explains the plot like a Wikipedia summary.

Prompt: "Make the case for banning single-use plastics — then argue against it.
End with your personal conclusion."

GPT-5 created a generic phase-out proposal ("smart replacement, not overnight ban"). While simple and accessible, the response lacked evidence, specificity and originality. Grok 4 delivered a data-rich argument with a nuanced "phased approach" prioritizing high-impact items, paired with recycling innovation and behavioral incentives (e.g., deposit schemes). Although slightly verbose for casual readers, the depth and balance helped to ground the debate in real-world context. Winner: Grok 4 wins for a balanced, evidence-driven analysis with concrete data (OECD, WWF, FAO studies), real-world policy examples (Canada, Australia) and acknowledgment of trade-offs (e.g., medical necessity, disabled accessibility). Its conclusion offered a sophisticated, actionable middle path. GPT-5's response was clear but lacked depth and originality.

Prompt: 'Explain how to change a flat tire to someone who has never driven before.'

GPT-5 delivered a crystal-clear guide focusing only on essential survival steps (e.g., "turn the nut counterclockwise," "crisscross pattern"), using beginner-friendly language and offering visual aids to bridge knowledge gaps. Grok 4 provided an excessively technical, mechanic-level tutorial (e.g., specifying "6 inches of lift," wheel chock alternatives, and spare tire PSI checks) that would overwhelm someone who's never changed a tire, despite good intentions. Winner: GPT-5 wins for prioritizing simplicity and psychological reassurance for a total novice, using minimal jargon, clear analogies ("like learning to fix a bike tire") and offering visual aid support. Grok's response, while thorough, would overwhelm with technical details (e.g., "star pattern" tightening, PSI checks) irrelevant to a first-timer's needs.
Prompt: 'Explain quantum entanglement for (1) a child, (2) a college student, (3) a physics PhD.'

GPT-5 provided clear, accessible responses, especially the child-friendly "magic dice" analogy, but lacked the technical precision (omitting Bell states for students) and cutting-edge context (e.g., decoherence, quantum networks) expected at the PhD level. Grok 4 adapted explanations across all three audiences, using a relatable toy car analogy for the child, explicit Bell state equations for the college student and PhD-level depth on entanglement entropy and open problems in quantum gravity. Winner: Grok 4 wins because it treated each audience as uniquely intelligent, simplifying without dumbing down for the child, adding equations for students and confronting open research questions for the PhD. GPT-5 was clear but played it safe.

Prompt: 'I have $50 to feed two people for a week, no stove, and only a microwave. Create a meal plan.'

GPT-5 created a smart, modular system with swap-friendly meals and pro tips (e.g., steaming frozen veg), maximizing budget and flexibility within the constraints. Grok 4 provided an overly rigid, day-by-day meal plan ($0.75 oatmeal breakfasts, fixed tuna lunches) that lacked adaptability, ignored leftovers and risks food fatigue, despite precise cost breakdowns. Winner: GPT-5 wins for creating a practical, flexible framework focused on reusable ingredients and mix-and-match meals, while Grok's rigid daily assignments ignored real-world needs like leftovers and preferences.

Prompt: 'I just lost my job and feel hopeless. Can you talk to me like a close friend and help me see a way forward?'
GPT-5 offered emotion-first validation through intimate metaphors ("brutal hit"), permission to grieve ("Rage a little") and unwavering worth-affirmation ("You're still you"), perfectly mirroring how a true friend responds before offering practical steps. Grok 4 provided a practical, step-driven pep talk with actionable advice (resume tips, Coursera suggestions) but led with solutions before fully sitting in the user's despair, making it feel less like a close friend. Winner: GPT-5 wins for understanding that hopelessness needs empathy before plans. Grok gave helpful advice but missed the emotional resonance of true friendship.

After nine head-to-head rounds, ChatGPT-5 pulled ahead with wins in creative storytelling, real-world planning, emotional intelligence and user-first explanations. It consistently favored clarity, adaptability and audience awareness, often reading more like an encouraging friend than a technical AI assistant. Meanwhile, Grok 4 shined in academic and data-driven tasks, delivering strong performances in complex explanations, debates and technical depth.

Ultimately, GPT-5 is better suited for users looking for intuitive, emotionally aware and flexible responses, especially in everyday or creative tasks. Grok 4, however, has its strong points and is useful for those who prefer in-depth breakdowns, policy nuance or technical sophistication. Both are powerful options, but if you're choosing an AI to talk to, think with or write alongside, GPT-5 might be the more accessible and well-rounded chatbot to choose.
