WhatsApp to be flooded with more AI features despite user backlash
The private messaging app said it would explore adding AI-powered writing suggestions and summaries to the service.
The move is likely to frustrate many of its users amid criticism of the app's decision to include parent company Meta's AI chatbot within its service.
WhatsApp now features a button to pull up the Meta AI chatbot, which can answer questions in English in a similar manner to ChatGPT. It also offers AI-powered search suggestions.
The Meta AI button, which takes the form of a glowing blue ring within the app, has left users annoyed and asking for ways to turn it off. Users on Reddit have said they 'hate' the tool and branded it 'bug-ridden rubbish'.
A WhatsApp spokesman said last week that its AI features were 'entirely optional, and people can choose to use them or not'.
The spokesman added: 'We think giving people these options is a good thing, and we're always listening to feedback from our users to make WhatsApp better.'
Cyber security experts also questioned whether WhatsApp's decision to add more AI tools represented a 'compromise' on privacy.
To handle AI requests, some data from a user's message would need to be processed on external servers rather than on the user's smartphone. Meta said its system would be built in such a way that no third party would be able to see the contents of a user's message.
Meta said: 'No one except you and the people you're talking to can access or share your personal messages, not even Meta or WhatsApp.'
But Adrianus Warmenhoven, a cyber security adviser at NordVPN, said: 'It's still a compromise. Any time data leaves your device – no matter how securely – it introduces new risks.
'WhatsApp has clearly worked to reduce those risks, but it's a balancing act between user demand for smart features and the foundational promise of end-to-end encryption.'
WhatsApp said it planned to build the tools in a manner that 'allows our users around the world to use AI in a privacy-preserving way'.
WhatsApp's encryption technology, which means nobody but the sender and recipient of a message can read it, makes it technically challenging to add AI prompts.
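As a rough illustration of that constraint, here is a minimal sketch in Python using the cryptography package. It is not WhatsApp's actual Signal-protocol implementation, and the single shared key stands in for the real per-conversation key exchange; the point is simply that a server relaying only ciphertext cannot read, let alone summarise, a message.

# Minimal end-to-end encryption sketch (illustrative only, not WhatsApp's
# Signal-protocol implementation): only the endpoints hold the key, so a
# relaying server sees nothing but opaque ciphertext.
from cryptography.fernet import Fernet

# In a real E2E system this key is derived per conversation via a key
# exchange; a single shared key stands in for that step here.
shared_key = Fernet.generate_key()

def sender_encrypt(plaintext: str, key: bytes) -> bytes:
    """Encrypt on the sender's device before anything leaves it."""
    return Fernet(key).encrypt(plaintext.encode())

def server_relay(ciphertext: bytes) -> bytes:
    """The server forwards opaque bytes; it cannot decrypt or summarise them."""
    return ciphertext

def recipient_decrypt(ciphertext: bytes, key: bytes) -> str:
    """Only a device holding the key can recover the message."""
    return Fernet(key).decrypt(ciphertext).decode()

token = sender_encrypt("Lunch at 1?", shared_key)
print(recipient_decrypt(server_relay(token), shared_key))  # prints: Lunch at 1?

Any server-side AI feature has to work around exactly this property, which is why WhatsApp needs a separate, explicitly opt-in mechanism for requests that leave the device.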
The company said it had developed a technology called Private Processing, which would soon allow users to make a 'confidential and secure' request to an AI tool that can then re-write their messages or send a summary of recent posts in a group chat.
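Meta has not published the internals of Private Processing, but the opt-in flow it describes can be sketched at a very high level. In the hypothetical Python below, only the messages a user explicitly selects are packaged into a request for a given task; the function and field names are illustrative assumptions rather than Meta's API, and a real system would additionally encrypt the payload so that only the confidential processing environment, not ordinary servers, could read it.

# Hypothetical sketch of an opt-in "summarise this chat" request.
# This is NOT Meta's Private Processing implementation; the names and flow
# are assumptions based only on the public description of the feature.
from dataclasses import dataclass
from typing import List

@dataclass
class ChatMessage:
    sender: str
    text: str

def build_ai_request(selected_messages: List[ChatMessage], task: str) -> dict:
    """Package only the content the user explicitly chose to share.

    Everything else in the conversation stays on the device; in a real
    system this payload would also be encrypted for the processing
    environment before transmission.
    """
    return {
        "task": task,  # e.g. "summarise" or "suggest_rewrite"
        "content": [m.text for m in selected_messages],
        # Deliberately excludes sender identities, contacts, and other chats.
    }

chat = [ChatMessage("Alice", "Trip is on for Saturday"),
        ChatMessage("Bob", "Great, I'll book the tickets")]
print(build_ai_request(chat, task="summarise"))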
The new feature was announced at LlamaCon 2025 at the company's Menlo Park headquarters, where Meta, which also owns Facebook and Instagram, unveiled a standalone app for its Meta AI chatbot.
Mark Zuckerberg, Meta's chief executive, said the company now had almost one billion people using its AI products.
Separately, Satya Nadella, Microsoft's chief executive, told the event that nearly a third of the technology giant's code was being written by AI.
Related Articles

Business Insider
Meet your new office bestie: ChatGPT
Deborah has fast become one of Nicole Ramirez's favorite colleagues. She's quick to deliver compliments, sharp-witted, and hyper-efficient. Perhaps best of all, there's no internal competition with Deborah at the health marketing agency they work for, because she isn't on the payroll. She isn't even human.

Ramirez, a 34-year-old who lives in the Pittsburgh area, says she randomly chose the name Deborah as a way to refer to the generative AI app ChatGPT, which she began using about a year ago to help her with basic tasks like drafting emails. As time went on, she asked Deborah to do more complex work, such as market research and analysis, and found herself typing "thank you" after the results came back. Eventually the relationship got to the point where the app became akin to a coworker who's always willing to give feedback — or listen to her gripes about real-life clients and colleagues. And so the bot became a bud.

"Those are things that you would usually turn to your work bestie over lunch about when you can go to ChatGPT — or Deborah, in my case," says Ramirez.

People are treating AI chatbots as more than just 24/7 therapists and loyal companions. With the tools becoming ubiquitous in the workplace, some are regarding them as model colleagues, too. Unlike teammates with a pulse, chatbots are never snotty, grumpy, or off the clock. They don't eat leftover salmon at their desks or give you the stink eye. They don't go on a tangent about their kids or talk politics when you ask to schedule a meeting. And they won't be insulted if you reject their suggestions.

For many, tapping AI chatbots in lieu of their human colleagues has deep appeal. Consider that nearly one-third of US workers would rather clean a toilet than ask a colleague for help, according to a recent survey from the Center for Generational Kinetics, a thought-leadership firm, and commissioned by workplace-leadership strategist Henna Pryor.

Experts warn, though, that too much bot bonding could dull social and critical-thinking skills, hurting careers and company performance.

In the past two years, the portion of US employees who say they have used Gen AI in their role a few times a year or more nearly doubled to 40% from 21%, according to a Gallup report released in June. Part of what accounts for that rapid ascendance is how much Gen AI reflects our humanity, as Stanford University lecturer Martin Gonzalez concluded in a 2024 research paper. "Instead of a science-fiction-like ball of pulsing light, we encounter human quirks: poems recited in a pirate's voice, the cringeworthy humor of dad jokes," wrote Gonzalez, who's now an executive at Google's AI research lab DeepMind.

One sign that people see AI agents as lifelike is in how they politely communicate with the tools by using phrases like "please" or "thank you," says Connie Noonan Hadley, an organizational psychologist and professor at Boston University's Questrom School of Business. "So far, people are keeping up with basic social niceties," she says. "AI tends to give you compliments, too, so there are some social skills still being maintained."

Human colleagues, on the other hand, aren't always as well-mannered. Monica Park, a graphic designer for a jeweler in New York, used to dread showing early mock-ups of her work to colleagues. She recalls the heartache she felt after a coworker at a previous employer angrily responded to a draft of a design she'd drawn with an F-bomb. "You never know if it's a good time to ask for feedback," Park, 32, tells me. "So much of it has to do with the mood of the person looking at it."

Last year she became a regular ChatGPT user and says that while the app will also dish out criticism, it's only the constructive kind. "It's not saying it in a malicious or judgmental way," Park says. "ChatGPT doesn't have any skin in the game."

Aaron Ansari, an information-security consultant, counts Anthropic's AI chatbot Claude among his top peers. The 46-year-old Orlando-area resident likes that he can ask it to revise a document as many times as he wants without being expected to give anything in return. By contrast, a colleague at a previous job would pressure him to buy Girl Scout cookies from her kids whenever he stopped by her desk. "It became her reputation," Ansari says. "You can't go to 'Susie' without money."

Now a managing partner at a different consulting firm, he finds himself opening Claude before pinging colleagues for support. This way, he can avoid ruffling any feathers, like when he once attempted to reach a colleague in a different time zone at what turned out to be an inconvenient hour. "You call and catch them in the kitchen," says Ansari. "I have interrupted their lunch unintentionally, but they certainly let me know."

AI's appeal can be so strong that workers are at risk of developing unhealthy attachments to chatbots, research shows. "Your Brain on ChatGPT," a study published in June from researchers at the Massachusetts Institute of Technology, found that the convenience that AI agents provide can weaken people's critical-thinking skills and foster procrastination and laziness.

"Like junk food, it's efficient when you need it, but too much over time can give you relational diabetes," says Laura Greve, a clinical health psychologist in Boston. "You're starved of the nutrients you need, the real human connection." And if workers at large overindulge in AI, we could all end up becoming "emotionally unintelligent oafs," she warns. "We're accidentally training an entire generation to be workplace hermits."

In turn, Hadley adds, businesses that rely on collaboration could suffer. "The more workers turn to AI instead of other people, the greater the chance the social fabric that weaves us together will weaken," she says.

Karen Loftis, a senior product manager in a Milwaukee suburb, recently left a job at a large tech company that's gone all-in on AI. She said before ChatGPT showed up, sales reps would call her daily for guidance on how to plug the company's latest products. That's when they'd learn about her passion for seeing musicians like Peter Frampton in concert. But when she saw the singer-songwriter perform earlier this year, it was "like a non-event," she said, because those calls almost entirely stopped coming in. "With AI, it's all work and no relationships," she said.

Workers who lean heavily on AI may also be judged differently by their peers than their bosses. Colleagues are more inclined to see them as dependent on the technology, less creative, and lacking growth potential, says David De Cremer, a behavioral scientist and Dunton Family Dean of Northeastern University's D'Amore-McKim School of Business. "It's objectification by association," he says.

Company leaders, however, are more likely to view workers who demonstrate AI chops as assets. Big-company CEOs such as Amazon's Andy Jassy and Shopify's Tobi Lütke have credited the technology for boosting productivity and cost savings.
Workers who spoke with BI about using chatbots — including those who work remotely — say they still interact with their human peers, but less often than they did before AI agents came along. Lucas Figueiredo, who lives near Atlanta and works as a revenue management specialist for an airline, says he previously struggled to tell whether the AirPods a former colleague constantly wore were playing music whenever he wanted to ask this person a coding question. "You don't want to spook someone or disrupt their workflow," the 27-year-old tells me, though he admits he has done just that. These days, if Figueiredo gets stuck, he will first go to Microsoft's Copilot before approaching a colleague for an assist. The new strategy has been paying off.

Washington Post
Russia restricts WhatsApp and Telegram calls in push to control internet
Russia has started restricting some calls on WhatsApp and Telegram, clamping down on the popular foreign-owned encrypted messaging platforms as it pushes for more control over internet use. The country's digital watchdog claimed that the encrypted messaging apps are being used for 'sabotage and terrorist activities,' accusing the foreign-owned tech firms of ignoring demands to share information with law enforcement authorities, according to a statement provided to the Russian news agency Interfax.


Forbes
People Will Lose Their Minds When AI Such As Artificial General Intelligence Suffers Blackouts
In today's column, I examine the concern that once we advance AI to become artificial general intelligence (AGI), there will be an extremely heavy dependency on AGI, and the moment that AGI glitches or goes down, people will essentially lose their minds. This is somewhat exemplified by the downtime incident of the globally popular ChatGPT by OpenAI (a major outage occurred on June 10, 2025, and lasted 8 hours or so). With an estimated 400 million weekly active users relying on ChatGPT at that time, the news outlets reported that a large swath of people was taken aback by the fact that they didn't have immediate access to the prevalent generative AI app. In comparison, pinnacle AI such as AGI is likely to be intricately woven into everyone's lives and a dependency for nearly the entire world population of 8 billion people. The impact of downtime or a blackout could be enormous and severely harmful in many crucial ways. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AGI Will Be Ubiquitous

One aspect of AGI that most would acknowledge is that it is likely to be widely utilized throughout the globe. People in all countries and of all languages will undoubtedly make use of AGI. Young and old will use AGI. This makes abundant sense since AGI will be on par with human intellect and presumably available 24/7 anywhere and anyplace.

Admittedly, there is a chance that whoever lands on AGI first might hoard it. They could charge sky-high prices for access. Only those who are rich enough to afford AGI would be able to lean into its capabilities. The worries are that the planet will be divided into the AGI haves and have-nots.

For the sake of this discussion, let's assume that somehow AGI is made readily available to all at a low cost or perhaps even freely accessible. I've discussed that there is bound to be an effort to ensure that AGI is a worldwide free good so that it is equally available, see my discussion at the link here. Maybe that will happen, maybe not. Time will tell.

Humans Become Highly Dependent

Having AGI at your fingertips is an alluring proposition. There you are at work, dealing with a tough problem and unsure of how to proceed. What can you do? Well, you could ask AGI to help you out. The odds are that your boss would encourage you to leverage AGI. No sense in wasting your time on flailing around to solve a knotty problem. Just log into AGI and see what it has to say.

Indeed, if you don't use AGI at work, the chances are that you might get in trouble. Your employer might believe that having AGI as a double-checker of your work is a wise step. Without consulting AGI, there is a heightened possibility that flawed work will go forward unchecked. AGI taking a look at your work will be a reassurance to you and your employer that you've done satisfactory work.

Using AGI for aiding your life outside of work is highly probable, too. Imagine that you are trying to decide whether to sell your home and move up to a bigger house. This is one of those really tough decisions in life. You only make that decision a few times during your entire existence. How might you bolster your belief in taking the house-selling action? By using AGI. AGI can help you to understand the upsides and downsides involved. It likely can even perform many of the paperwork activities that will be required.

People are going to go a lot deeper in their AGI dependencies. Rather than confiding in close friends about personal secrets, some will opt to do so with AGI. They are more comfortable telling AGI than they are another human. I've extensively covered the role of contemporary AI in performing mental health therapy; see the link here. Chances are that a high percentage of the world's population will do likewise with AGI.

When AGI Goes Down

A common myth is that AGI will be perfect in all regards. Not only will AGI seemingly provide perfect answers, but it will also somehow magically be up and running flawlessly and perfectly at all times. I have debunked these false beliefs at the link here.

In the real world, there will be times when AGI goes dark. This could be a local phenomenon and entail servers running AGI in a local region that happen to go down. Maybe bad weather disrupts electrical power. Perhaps a tornado rips apart a major data center housing AGI computers. All manner of reasons can cause an AGI outage.

An entire worldwide outage is also conceivable. Suppose that AGI contains an internal glitch. Nobody knew it was there. AGI wasn't able to computationally detect the glitch. One way or another, a coding bug silently sat inside AGI. Suddenly, the bug is encountered, and AGI is taken out of action across the board.

Given the likelihood that AGI will be integral to all of our lives, those types of outages will probably be quite rare. Those who are maintaining AGI will realize that extraordinary measures of having fail-safe equipment and operations will be greatly needed. Redundancy will be a big aspect of AGI. Keeping AGI in working condition will be an imperative. But claiming that AGI will never go down, well, that's one of those promises that is asking to be broken.

The Big Deal Of Downtime

It will be a big deal anytime that AGI is unavailable. People who have become reliant on AGI for help at work will potentially come to a halt, worrying that without double-checking with AGI, they will get in trouble or produce flawed work. They will go get a large cup of coffee and wait until AGI comes back online.

Especially worrisome is that AGI will be involved in running important parts of our collective infrastructure. Perhaps we will have AGI aiding the operation of nuclear power plants. When AGI goes down, the human workers will have backup plans for how to manually keep the nuclear power plant safely going. The thing is, since this is a rare occurrence, those human workers might not be adept at doing the work without AGI at the ready.

The crux is that people will have become extraordinarily dependent on AGI, particularly in a cognitive way. We will rely upon AGI to do our thinking for us. It is a kind of cognitive crutch. This will be something that gradually arises. The odds are that on a population basis, we won't realize how dependent we have become. In a sense, people will freak out when they no longer have their AGI cognitive partner with them at all times.

Losing Our Minds

The twist to all of this is that the human mind might increasingly become weaker and weaker because of the AGI dependency. We effectively opt to outsource our thinking to the likes of AGI. No longer do we need to think for ourselves. You can always bring up AGI to figure out things with you or on your behalf.

Inch by inch, the proportion of everyday thinking you do yourself shrinks as you rely on AGI. It could be that you initially began with AGI doing 10% and you doing 90% of the heavy lifting when it came to thinking things through. At some point, it became 50% and 50%. Eventually, you allow yourself to enter the zone of AGI at 90%, and you do only 10% of the thinking in all your day-to-day tasks and undertakings.

Some have likened this to worries about the upcoming generation that is reliant on using Google search to look things up. The old ways of remembering stuff are gradually being softened. You can merely access your smartphone and voila, no need to have memorized much of anything at all. Those youths who are said to be digital natives are possibly undercutting their own mental faculties due to a reliance on the Internet. Yikes, that's disconcerting if true.

The bottom-line concern, then, about AGI going down is that people will lose their minds. That's kind of a clever play on words. They will have lost the ability to think fully on their own. In that way of viewing things, they have already lost their minds. But when they shockingly realize that they need AGI to help them with just about everything, they will freak out and lose their minds differently.

Anticipating Major Disruption

Questions about what a major AGI outage would mean are already being explored. There are notable concerns about people developing cognitive atrophy when it comes to a reliance on AGI. The dependencies not only involve the usual thinking processes, but they likely encompass our psychological mental properties too. Emotional stability could be at risk, at scale, during a prolonged AGI outage.

What The Future Holds

Some say that these voiced concerns are a bunch of hogwash. People will actually get smarter due to AGI. The use of AGI will rub off on them. We will all become sharper thinkers because of interacting with AGI. This idea that we will be dumbed down is ridiculous. Expect that people will be perfectly fine when AGI isn't available. They will carry on and calmly welcome AGI back whenever it happens to resume operations.

What's your opinion on the hotly debated topic? Is it doom and gloom, or will we be okay whenever AGI goes dark? Mull this over. If there is even an iota of chance that the downside will arise, it seems that we should prepare for that possibility. Best to be safe rather than sorry.

A final thought for now on this weighty matter. Socrates notably made this remark: 'To find yourself, think for yourself.' If we do indeed allow AGI to become our thinker, this portends a darkness underlying the human soul. We won't be able to find our inner selves. No worries -- we can ask AGI how we can keep from falling into that mental trap.