WhatsApp defends 'optional' AI tool that cannot be turned off


BBC News, 23 April 2025

WhatsApp says its new AI feature embedded in the messaging service is "entirely optional" - despite the fact it cannot be removed from the app.

The Meta AI logo is an ever-present blue circle with pink and green splashes in the bottom right of your Chats screen. Interacting with it opens a chatbot designed to answer your questions, but it has drawn attention and frustration from users who cannot turn it off.

It follows Microsoft's Recall feature, which was also an always-on tool - before the firm faced a backlash and decided to allow people to disable it.

"We think giving people these options is a good thing and we're always listening to feedback from our users," WhatsApp told the BBC.
It comes the same week Meta announced an update to its teen accounts feature on Instagram. The firm revealed it was testing AI technology in the US designed to find accounts belonging to teenagers who have lied about their age on the platform.
Where is the new blue circle?
If you can't see it, you may not be able to use it yet. Meta says the feature is only being rolled out to some countries at the moment and advises it "might not be available to you yet, even if other users in your country have access".

As well as the blue circle, there is a search bar at the top inviting users to 'Ask Meta AI or Search'. This is also a feature on Facebook Messenger and Instagram, with both platforms owned by Meta.

Its AI chatbot is powered by Llama 4, one of the large language models operated by Meta. Before you ask it anything, there is a long message from Meta explaining what Meta AI is - stating it is "optional". On its website, WhatsApp says Meta AI "can answer your questions, teach you something, or help come up with new ideas".

I tried out the feature by asking the AI what the weather was like in Glasgow, and it responded in seconds with a detailed report on temperature, the chance of rain, wind and humidity. It also gave me two links for further information, but this is where it ran into problems. One of the links was relevant, but the other tried to give me additional weather details for Charing Cross - not the location in Glasgow, but the railway station in London.
What do people think of it?
So far in Europe people aren't very pleased, with users on X, Bluesky, and Reddit outlining their frustrations - and Guardian columnist Polly Hudson was among those venting their anger at not being able to turn it off.

Dr Kris Shrishak, an adviser on AI and privacy, was also highly critical, and accused Meta of "exploiting its existing market" and "using people as test subjects for AI".

"No one should be forced to use AI," he told the BBC. "Its AI models are a privacy violation by design - Meta, through web scraping, has used personal data of people and pirated books in training them.

"Now that the legality of their approach has been challenged in courts, Meta is looking for other sources to collect data from people, and this feature could be one such source."

An investigation by The Atlantic revealed Meta may have accessed millions of pirated books and research papers through LibGen - Library Genesis - to train its Llama AI.

Author groups across the UK and around the world are organising campaigns to encourage governments to intervene, and Meta is currently defending a court case brought by multiple authors over the use of their work. A spokesperson for Meta declined to comment on The Atlantic investigation.
What are the concerns?
When you first use Meta AI in WhatsApp, it states the chatbot "can only read messages people share with it". "Meta can't read any other messages in your personal chats, as your personal messages remain end to end encrypted," it says.

Meanwhile the Information Commissioner's Office told the BBC it would "continue to monitor the adoption of Meta AI's technology and use of personal data within WhatsApp".

"Personal information fuels much of AI innovation so people need to trust that organisations are using their information responsibly," it said. "Organisations who want to use people's personal details to train or use generative AI models need to comply with all their data protection obligations, and take the necessary extra steps when it comes to processing the data of children."

And Dr Shrishak says users should be wary. "When you send messages to your friend, end to end encryption will not be affected," he said. "Every time you use this feature and communicate with Meta AI, you need to remember that one of the ends is Meta, not your friend."

The tech giant also highlights users should only share what they are happy with being used by others. "Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use," it says.
Additional reporting by Joe Tidy


Related Articles

Minister accused of cosying up to Big Tech admits that Artificial Intelligence DOES lie

Daily Mail, 3 hours ago


The Technology Secretary yesterday admitted that AI is 'not flawless' – but defended snubbing attempts to beef up copyright protections. Peter Kyle acknowledged the fast-emerging technology 'does lie' as he insisted the Government would 'never sell downstream' the rights of artists in the UK.

He also admitted 'mistakenly' saying his preferred option on AI and copyright was requiring rights-holders to 'opt out' of their material being used by Big Tech.

Sir Elton John last week described the situation as an 'existential issue'. He has also branded the Technology Secretary 'a moron'.

Mr Kyle has been accused of cosying up to Big Tech chiefs, meeting with Apple, Google, Amazon and Meta – Facebook's owner – ten times in little more than three months.

The Government is locked in a standoff with the House of Lords, which demands artists be offered immediate copyright protection as an amendment to the Data (Use and Access) Bill. Without this, the new law would hand a copyright exception to firms developing AI. Critics warn the Government's proposed 'opt out' system would allow the current 'Wild West' set-up, in which copyrighted material can be 'scraped' from the internet to 'train' AI models, to continue.

Speaking to Sky News, Mr Kyle said: 'I mistakenly said [the opt-out clause] was my preferred option... I've now gone back to the drawing board on that.'

When asked about the risk of AI producing unreliable content, Mr Kyle said: 'AI is not flawless... AI does lie, as it's based on human characteristics.'

The Government has said it will address copyright issues after the 11,500 responses to its consultation on AI's impact have been reviewed, rather than in what it has branded 'piecemeal' legislation such as the Lords' amendment.

UK film industry jobs at risk in tech revolution

By Daily Mail Reporter

The use of AI in the UK screen sector risks jobs, copyright breaches and creative integrity, a report has warned. The British Film Institute report, which analysed how the sector is using the technology, warned the industry must safeguard human creative control, with job losses likely as roles are replaced by AI. It warned that the 'primary issue' is the use of copyrighted material – such as film and TV scripts – in the training of AI models, without payment or the permission of rights-holders.

The issue has been highlighted by the Mail's 'Don't let Big Tech steal it' campaign, which calls for the Government to protect the UK's creative genius. Rishi Coupland, the BFI's director of research and innovation, said: 'AI could erode traditional business models, displace skilled workers and undermine trust in screen content.'

Campaigners urge UK watchdog to limit use of AI after report of Meta's plan to automate checks

The Guardian, 10 hours ago


Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments following a report that Mark Zuckerberg's Meta was planning to automate checks.

Ofcom said it was 'considering the concerns' raised by the letter following a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms – with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom's chief executive, Dame Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a 'retrograde and highly alarming step'.

'We urge you to publicly assert that risk assessments will not normally be considered as "suitable and sufficient", the standard required by … the Act, where these have been wholly or predominantly produced through automation,' they wrote. The letter also urged the watchdog to 'challenge any assumption that platforms can choose to water down their risk assessment processes'.

A spokesperson for Ofcom said: 'We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.'

Meta said the letter deliberately misstated the company's approach on safety and it was committed to high standards and complying with regulations.

'We are not using AI to make decisions about risk,' said a Meta spokesperson. 'Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content and our technological advancements have significantly improved safety outcomes.'

The Molly Rose Foundation organised the letter after NPR, a US broadcaster, reported last month that updates to Meta's algorithms and new safety features will mostly be approved by an AI system and no longer scrutinised by staffers.

According to one former Meta executive, who spoke to NPR anonymously, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly but would create 'higher risks' for users, because potential problems are less likely to be prevented before a new product is released to the public.

NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.

Gerry Adams's lawyer to pursue chatbots for libel

Telegraph, 11 hours ago


The high-profile media lawyer who represented Gerry Adams in his libel trial against the BBC is now preparing to sue the world's most powerful AI chatbots for defamation. As one of the most prominent libel lawyers in the UK, Paul Tweed said that artificial intelligence was the 'new battleground' in trying to prevent misinformation about his clients from being spread online.

Mr Tweed is turning his attention to tech after he recently helped the former Sinn Fein leader secure a €100,000 (£84,000) payout over a BBC documentary that falsely claimed he sanctioned the murder of a British spy.

The Belfast-based solicitor said he was already building a test case against Meta that could trigger a flurry of similar lawsuits, as he claims to have exposed falsehoods shared by chatbots on Facebook and Instagram.

It is not the first time tech giants have been sued for defamation over questionable responses spewed out by their chatbots. Robby Starbuck, the US activist known for targeting diversity schemes at major companies, has sued Meta for defamation, alleging that its AI chatbot spread a number of false claims about him, including that he took part in the Capitol riots. A Norwegian man also filed a complaint against OpenAI after its ChatGPT software incorrectly stated that he had killed two of his sons and been jailed for 21 years.

Mr Tweed, who has represented celebrities such as Johnny Depp, Harrison Ford and Jennifer Lopez, said: 'My pet subject is generative AI and the consequences of them repeating or regurgitating disinformation and misinformation.'

He believes statements put out by AI chatbots fall outside the protections afforded to social media companies, which have traditionally seen them avoid liability for libel. If successful, Mr Tweed will expose social media companies that have previously argued they should not be responsible for claims made on their platforms because they are technology companies rather than traditional publishers.

Mr Tweed said: 'I've been liaising with a number of well-known legal professors on both sides of the Atlantic and they agree that there's a very strong argument that generative AI will fall outside the legislative protections.' The lawyer said that chatbots are actually creating new content, meaning they should be considered publishers.

He said that the decision by many tech giants to move their headquarters to Ireland for lower tax rates had also opened them up to being sued in Dublin's high courts, where libel cases are typically decided by a jury. This setup is often seen as more favourable to claimants, which Mr Tweed himself says has fuelled a wave of 'libel tourism' in Ireland.

He also said Dublin's high courts are attractive as a lower-cost option compared with London, where he said the costs of filing libel claims are 'eye-watering'.

He said: 'I think it's absurd now, the level of costs that are being claimed. The libel courts in London are becoming very, very expensive and highly risky now. The moment you issue your claim form, the costs go into the stratosphere.

'It's not in anyone's interest for people to be deprived of access to justice. It will get to the point where nobody sues for libel unless you're a billionaire.'
