WhatsApp has added a new AI feature and you can't get rid of it
WhatsApp says the new blue AI circle in its app, which cannot be removed or switched off, is 'optional', and that users do not need to use it.
The blue circle has appeared in the iPhone and Android versions of WhatsApp, directly above the button used to start a new chat - and summons an AI chatbot that users can interact with.
The new feature cannot be disabled within WhatsApp's menus, sparking the ire of some users who do not want to interact with the new bot.
WhatsApp defended the rollout of the feature, saying it was "entirely optional" for people to use. "We regularly add new features to WhatsApp - just this year we have announced Music on Status, new chat themes, new camera effects and stickers," WhatsApp told Yahoo News in a statement.
"All these features, like Meta AI, are entirely optional, and people can choose to use them or not. We think giving people these options is a good thing and we're always listening to feedback from our users."
The Meta AI service takes the form of a chatbot which users can chat with in the same way as a normal WhatsApp conversation.
It appears as a blue circle above the icon used to start chats in the WhatsApp phone app.
Using a large language model (similar to those used by ChatGPT and other services), it answers questions and 'chats' with users.
The feature is available only in some countries, and even there it has not yet been rolled out to every user.
WhatsApp says it can be used for everything from settling a debate in a group chat to getting recipe suggestions.
As with all AI-generated content, there is no guarantee that the chatbot's responses are accurate.
The Meta AI chatbot works exactly like normal WhatsApp chats - you type a message and it responds within that chat.
You can also forward messages from other WhatsApp chats to the bot, for example to learn more about something that people are discussing.
You can also tag Meta AI (by typing @Meta AI) within other chats to get an answer from the bot.
Meta promises the chatbot will soon be able to create images, although this feature is not currently active.
Meta says: "Meta provides the generative AI technology that powers AI experiences available through WhatsApp. Messages from Meta AI are generated by artificial intelligence (AI), using a service from Meta, in response to the prompts and messages you send to the AI."
Meta says that the AI can only read messages directly shared with it, and has no access to data from your other chats.
The data from chats with the chatbot is not used to 'train' AI, Meta promises.
Users can also delete chats with the AI - and 'request' the deletion of information previously shared with the AI system, although it is not clear how this works.
"When you choose to use these features, Meta receives your prompts, the messages you share, and feedback to deliver relevant responses directly to you and to improve its AI quality," Meta says. "Only messages that mention @Meta AI, or that people choose to share with Meta AI, can be read by Meta. Meta can't read any other messages in your personal chats."
"add an extra layer of protection to your chats 🔒 advanced chat privacy stops media downloads, block chat exports, and keep your messages between you and who you're talking to pic.twitter.com/8wmh2OCgQL" — WhatsApp (@WhatsApp), April 23, 2025
Meta notes that because other personal messages are protected by end-to-end encryption (meaning that only the devices the messages are sent to and from can see them), it is impossible for the AI to read other messages.
"Not even WhatsApp or Meta, can read, listen to, or share them," Meta says.
Meta claims that the feature is optional, in that users can simply opt not to use it - but the prominent button cannot be disabled or removed.
On Reddit threads, users have complained bitterly about the feature, with one saying: "All I want to know is how to remove it."
WhatsApp's Breckmann said in response: "You need to take an action to chat to or invoke Meta AI (it's not 'on' until you choose to use it).
"Personal messages with friends and family are end-to-end encrypted – no change there. We make any chat with AI really visually distinct so it's clear they're different to personal chats.
"If you do talk to Meta AI, it doesn't link your personal WhatsApp account information on Facebook, Instagram, or any other apps provided by Meta.
"Always remember not to message Meta AI with information you don't want it to know."
But while the blue circle cannot be disabled, a new feature called Advanced Chat Privacy will allow users to stop information from their other chats being shared with Meta AI.
This feature does not appear to be available in the UK as yet (or at least not on the Android device Yahoo News UK tested it with), but WhatsApp promises it will be rolled out "soon".
"Advanced Chat Privacy is a new setting available in both chats and groups that helps prevent others from taking content outside of WhatsApp for when you may want extra privacy," Meta said.
"When the setting is on, you can block others from exporting chats, auto-downloading media to their phone, and using messages for AI features. That way everyone in the chat has greater confidence that no one can take what is being said outside the chat."

Related Articles


Tom's Guide
I'm an ex-barista and I challenged ChatGPT to a latte-making contest — here's who won
Picture this: it's the final of the World Barista Championship (yes, a real thing, and no, I've never been). There's a human barista, who's spent years honing their craft, researching their passion, and inventing brand new drinks to hopefully take the title and be crowned the World's #1 Barista. Their coffee is nuanced, innovative, and clever. Then there's a robot arm churning out coffee after coffee to a precise, scientific recipe programmed into its motherboard. Its coffee is caffeinated, comfortable, and most importantly: consistent.

There's nothing inherently wrong with either. When I lived in South Korea, I visited these robot-staffed cafes, because, well, I was curious. And while the coffee is drinkable, it's missing that oomph that human-made coffee has. Taste is mostly why I stay away from automatic espresso machines — I just don't think they can brew coffee as well as a human can. The best espresso machines are the ones that need a human hand. Prove me wrong.

This is the espresso machine I used in this challenge. I use this machine every day, and while it's not perfect, it's an awesome all-in-one machine that stops coffee getting everywhere thanks to the internal tamper.

At the weekend, I walked past one of my favorite cafes in my city. As the smell of freshly-ground specialty coffee permeated the air and meandered into my nose, I had to stop myself from going in and ordering my old faithful: an oat milk flat white. You have oat milk flat white at home, I told myself. So I moseyed back to my house and got brewing. Then I remembered those semi-cute-semi-scary robot arms in Korea, and the idea hit me: What if I ask ChatGPT how to make the best coffee? Although I love flat whites, I understand lattes to be a more common beverage, so I decided to pit two lattes head-to-head.

I opened up ChatGPT on my laptop and prompted: "How do I make a perfect latte?" If you know anything about espresso, ChatGPT's first mistake will be glaringly obvious. While it recommends 18g-20g of coffee (correct), it goes on to say a double shot is 2oz of espresso in 25-30 seconds. 2oz is around 56g, which is well over the size of a standard double shot. This is more like a lungo shot, which I wouldn't recommend for a latte — it'll weaken the flavor. Generally, the optimum ratio is 1:2, which means I'd get 36g of espresso out of 18g of coffee. And then to recommend I get 56g of espresso in 25-30 seconds? That means my coffee would need to be ground way too coarse to be espresso-fine.

I was skeptical, but I committed to the bit and followed ChatGPT's advice exactly. The next issue was ChatGPT telling me to "let[ting] air in [the milk] for 1-2 seconds". This is simply not true. I knew just from reading these instructions that my milk would be way too wet. I also don't like medium or dark roasted coffee (much prefer a juicy light-roasted African bean), but I persevered for science.

Here's what happened in the battle of ex-barista vs. AI. Spoiler alert: I won.
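Before walking through the two pours, it's worth making the ratio arithmetic concrete. Here's a rough sketch in Python; the gram and ounce figures come from the recipes above, and the fluid-ounce conversion is approximate.

```python
# Rough sketch of the brew-ratio arithmetic discussed above.
# Figures come from the article; the fluid-ounce conversion is approximate.
DOSE_G = 18.0  # grams of ground coffee in the portafilter

def espresso_yield(dose_g: float, ratio: float) -> float:
    """Liquid espresso out for a given brew ratio (1:2 -> ratio=2.0)."""
    return dose_g * ratio

# The conventional 1:2 espresso ratio:
print(espresso_yield(DOSE_G, 2.0))  # 36.0 g out of 18 g of coffee

# ChatGPT's "2 oz" double shot: a fluid ounce of espresso weighs
# roughly 28 g, so 2 oz comes to about 56 g...
chatgpt_yield_g = 2 * 28.0
# ...which works out to roughly a 1:3.1 brew ratio, i.e. a lungo:
print(round(chatgpt_yield_g / DOSE_G, 1))  # 3.1
```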
I followed ChatGPT's instructions exactly. Using my Wacaco Exagram Pro scale (one of the best coffee scales, IMO), I weighed out 18g of finely-ground coffee (notch 3 on my De'Longhi La Specialista Opera). I tamped with the Opera's smart tamper and slotted the portafilter into the group head. I placed the scale underneath my latte glass and started extraction. By 25 seconds, I only had 31g of coffee. Still, I waited, adamant on following ChatGPT's guidance. I stopped extraction when the scale read 56g, which took a massive 40 seconds. This extraction time is about 15 seconds longer than I'd usually pull a shot for, and resulted in a bitter, over-extracted taste. I was not impressed.

Now, the milk. ChatGPT told me to aerate for just 1-2 seconds, then plunge the steam wand into the milk until it reached around 140°F, so that's what I did. Reader, it was not good. I know from experience that you should probably aerate milk (gently) for around 5 seconds, or until it feels right. A lot of coffeemaking relies on feelings and vibes, which obviously an AI can't relate to. Because the milk wasn't aerated enough, I wasn't able to get latte art. Here's a photo of the ChatGPT latte.

Now I'll walk you through my process. Using the same Wacaco Exagram Pro scale, I weighed out 18g of the exact same dark roast coffee in my portafilter. I tamped with the same tamper and slotted the group handle into the machine. After placing the latte glass and scale under the portafilter, I started extraction. I timed the shot with my scale. When the espresso reached 36g, at 26 seconds, I stopped extraction. Then I poured just under 8oz of the exact same whole milk and steamed it my way. For this, I made the little tss-tss-tss sound for about 5 seconds and created a whirlpool in the center of the jug. I moved the jug up and down to ensure my milk was steamed evenly and had a nice layer of microfoam. Here's a photo of my latte.

No, the latte art isn't my best work. You'd have to check out my other reviews (like the De'Longhi Stilosa) to see some really nice art. However, in terms of both looks and taste, my latte was much more pleasant. The coffee was sweet and balanced with a little touch of bitterness, whereas the ChatGPT latte was overwhelmingly bitter thanks to the lungo shot. My latte was also more pleasing to drink thanks to the textured microfoam.

So who won? Me. Duh. I joke — I actually wanted ChatGPT to win, to prove me wrong. I wanted to rise to the challenge. But there wasn't really much of a challenge. I was simply left disappointed by how misguided ChatGPT's recipe was.

After crowning myself the winner, I thought maybe I shouldn't have been the only judge. Of course I'd give myself the title. Thankfully, I have a completely unbiased test subject in the form of my boyfriend (sarcasm intended). I gave him both lattes and asked him which was better. "This one looks better," he said (direct quote), while pointing to mine. He took a sip and said, "It tastes good." After I asked him which one tasted better, he shrugged and offered, "They both tasted good."

Hm. He's the kind of person who wants hot bean juice and wants it now, whereas I'm the kind of person who'd rather drink no coffee than bad coffee. So maybe I am biased. Maybe ChatGPT's recipe isn't so bad after all. But I think for other semi-coffee-snobs like me, human touch can never be replicated.


Scientific American
Truly Intelligent AI Could Play by the Rules, No Matter How Strange
Tic-tac-toe is about as simple as games get—but as Scientific American's legendary contributor Martin Gardner pointed out almost 70 years ago, it has complex variations and strategic aspects. They range from "reverse" games—where the first player to make three in a row loses—to three-dimensional versions played on cubes and beyond. Gardner's games, even if they boggle a typical human mind, might point us to a way to make artificial intelligence more humanlike. That's because games in their endless variety—with rules that must be imagined, understood and followed—are part of what makes us human.

Navigating rules is also a key challenge for AI models as they start to approximate human thought. And as things stand, it's a challenge where most of these models fall short. That's a big deal because if there's a path to artificial general intelligence, the ultimate goal of machine-learning and AI research, it can only come through building AIs that are capable of interpreting, adapting to and rigidly following the rules we set for them.

To drive the development of such AI, we must develop a new test—let's call it the Gardner test—in which an AI is surprised with the rules of a game and is then expected to play by those rules without human intervention. One simple way to achieve the surprise is to disclose the rules only when the game begins.

The Gardner test, with apologies to the Turing test, is inspired by and builds on the pioneering work in AI on general game playing (GGP), a field largely shaped by Stanford University professor Michael Genesereth. In GGP competitions, AIs running on standard laptops face off against other AIs in games whose rules—written in a formal mathematical language—are revealed only at the start. The test proposed here advances a new frontier: accepting game rules expressed in a natural language such as English. Once a distant goal, this is now within reach of modern AIs because of the recent breakthroughs in large language models (LLMs) such as those that power ChatGPT and that fall within the families of Claude and Llama.

The proposed challenge should include a battery of tests that could be initially focused on games that have been staples of GGP competitions such as Connect Four, Hex and Pentago. It should also leverage an impressive array of games that Gardner wrote about. Test design could benefit from the involvement of the vibrant international GGP research community, developers of frontier AI models and, of course, diehard Martin Gardner fans.

But to pass the new test, it isn't enough to create an AI system that's good at playing one specific predetermined game or even many. Instead, an AI must be designed to master any strategy game on the fly. Strategy games require humanlike ability to think across and beyond multiple steps, deal with unpredictable responses, adapt to changing objectives and still conform to a strict rule set. That's a big leap from today's top game-playing AI models, which rely on knowing the rules in advance to train their algorithms. Consider, for instance, AlphaZero, the revolutionary AI model that's capable of playing three games—chess, Go and shogi (Japanese chess)—at a superhuman level.
AlphaZero learns through a technique known as "self-play"—it repeatedly plays against a copy of itself, and from that experience, it gets better over time. Self-play, however, requires the rules of each game to be set before training. AlphaZero's ability to master complex games is undoubtedly impressive, but it's a brittle system: if you present AlphaZero with a game different than the ones it's learned, it will be completely flummoxed.

In contrast, an AI model performing well on the proposed new test would be capable of adapting to new rules, even in the absence of data; it would play any game and follow any novel rule set with power and precision.

That last point—precision—is an important one. You can prompt many generative AI systems to execute variants on simple games, and they'll play along: ChatGPT can play a 4×4 or 5×5 variant of tic-tac-toe, for instance. But an LLM prompt is best thought of as a suggestion rather than a concrete set of rules—that's why we often have to coax, wheedle and prompt-tune LLMs into doing exactly what we want. A general intelligence that would pass the Gardner test, by contrast, would by definition be able to follow the rules perfectly: not following a rule exactly would mean failing the test.

Specialized tools that operate without truly understanding the rules tend to color outside the lines, reproducing past errors from training data rather than adhering to the rules we set. It's easy to imagine real-world scenarios in which such errors could be catastrophic: in a national security context, for instance, AI capabilities are needed that can accurately apply rules of engagement dynamically or negotiate subtle but crucial differences in legal and command authorities. In finance, programmable money is emerging as a new form of currency that can obey rules of ownership and transferability—and misapplying these rules could lead to financial disaster.

Ironically, building AI systems that can follow rules rigorously would ultimately make it possible to create machine intelligences that are far more humanlike in their flexibility and ability to adapt to uncertain and novel situations. When we think of human game players, we tend to think of specialists: Magnus Carlsen is a great chess player but might not be so hot at Texas Hold'em. The point, though, is that humans are capable of generalizing; if Carlsen ever gave up chess, he could be a decent contender for the Pentamind World Championship, which celebrates the best all-round games player.

Game playing with a novel set of rules is crucial to the next evolution of AI because it will potentially let us create AIs that will be capable of anything—but that will also meticulously and reliably follow the rules we set for them. If we want powerful but safe AI, testing its ability in playing games on the fly might be the best path forward.
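To make "self-play" concrete, here is a toy sketch in Python: a tic-tac-toe agent that improves by playing against a copy of itself and nudging a value table toward each game's final result. It is a deliberately tiny stand-in (illustrative names, a tabular value estimate instead of AlphaZero's neural networks and tree search), but the training loop has the same shape.

```python
# Toy "self-play" sketch: a tic-tac-toe agent plays copies of itself
# and learns a value table from game outcomes. Illustrative only; a
# tiny stand-in for AlphaZero-style training, not AlphaZero itself.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # board string -> estimated value for "X"

def choose_move(board, player, eps=0.1):
    """Epsilon-greedy move selection against the learned value table."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < eps:
        return random.choice(moves)
    def value_after(m):
        return values["".join(board[:m] + [player] + board[m + 1:])]
    # "X" seeks high values; "O" (the copy it plays against) seeks low ones.
    return max(moves, key=value_after) if player == "X" else min(moves, key=value_after)

def self_play_episode(alpha=0.1):
    """Play one game against a copy of itself, then back up the result."""
    board, player, visited = ["."] * 9, "X", []
    while True:
        board[choose_move(board, player)] = player
        visited.append("".join(board))
        w = winner(board)
        if w or "." not in board:
            outcome = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
            for state in visited:  # nudge each visited state toward the result
                values[state] += alpha * (outcome - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):  # the "experience" loop: play, learn, repeat
    self_play_episode()
```

Even in this miniature, the essay's point is visible: the rules live in LINES and winner() and are fixed before training ever starts, so the learned values are worthless for any other game. That baked-in assumption is exactly the brittleness a Gardner-style test would expose.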


Bloomberg
Meta Gets Out Its Checkbook to Catch Up in the AI Race
It sounded like something that should have come from the sports desk — a $14.3 billion transfer fee for a young up-and-coming prospect as Meta Platforms Inc. looks to rebuild its team for the tough season ahead. The head coach is an under-pressure Mark Zuckerberg, and the hot talent is Alexandr Wang, 28. His company is Scale AI, and Meta is taking a 49% stake, it was confirmed last week. Were this an acquisition, it would be the second largest in Meta's history after its $19 billion purchase of WhatsApp in 2014. But it's not an acquisition, so don't call it that, even though it bears many of the hallmarks of one.