Meta removes 6.8 million WhatsApp accounts linked to criminal scammers

Euronews, a day ago
WhatsApp has taken down 6.8 million accounts 'linked to criminal scam centres' targeting people online around the world, its parent company Meta said.
The account deletions, which Meta said took place over the first six months of the year, come as part of the company's wider efforts to crack down on scams.
In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as alerts it is testing that urge users to pause before responding.
Scams are becoming all too common and increasingly sophisticated in today's digital world. Too-good-to-be-true offers and unsolicited messages attempt to steal consumers' information or money, with scams filling our phones, social media, and other corners of the internet each day.
Meta noted that 'some of the most prolific' sources of scams are criminal scam centres, which are often run by organised crime and rely on forced labour – and warned that such operations often target people on many platforms at once in an attempt to evade detection.
That means that a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, Meta said.
Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps – as well as TikTok, Telegram, and AI-generated messages made using ChatGPT – to offer payments for fake likes, to enlist people into a pyramid scheme, or to lure others into cryptocurrency investments.
Meta linked these scams to a criminal scam centre in Cambodia and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.

Related Articles

Sweden's leader uses ChatGPT. Should politicians use AI chatbots?

Euronews, 3 hours ago

Swedish Prime Minister Ulf Kristersson has stirred up public debate over politicians' use of artificial intelligence (AI) after telling local media he uses ChatGPT to brainstorm and seek a 'second opinion' on how to run the country.

Kristersson told the Swedish newspaper Dagens Industri that he uses ChatGPT and the French service Le Chat, and that his colleagues also use AI in their everyday work.

'I use it myself quite often, if for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions,' he said.

The comment sparked backlash, with critics arguing that voters had elected Kristersson, not ChatGPT, to lead Sweden.

Technology experts in Sweden have since raised concerns about politicians using AI tools in this way, citing the risk of making political decisions based on inaccurate information. Large language models' (LLMs) training data can be incomplete or biased, causing chatbots to give incorrect answers, known as 'hallucinations'.

'Getting answers from LLMs is cheap, but reliability is the biggest bottleneck,' Yarin Gal, an associate professor of machine learning at the University of Oxford, previously told Euronews Next.

Experts were also concerned about sensitive state information being used to train later models of ChatGPT, which is made by OpenAI and whose servers are based in the United States.

Kristersson's press team brushed aside the security concerns. 'Of course, it's not security-sensitive information that ends up there. It's used more as a sounding board,' Tom Samuelsson, Kristersson's press secretary, told the newspaper Aftonbladet.

Should politicians use AI chatbots?

This is not the first time a politician has come under fire over their use of AI – nor even the first time in Sweden. Last year, Olle Thorell, a Social Democrat in Sweden's parliament, used ChatGPT to write 180 written questions to the country's ministers. He faced criticism for overburdening ministers' staff, who are required to answer within a set time frame.

Earlier this year, United Kingdom tech secretary Peter Kyle's use of ChatGPT came under fire after the British magazine New Scientist revealed he had asked the chatbot why AI adoption is so slow in the UK business community and which podcasts he should appear on to 'reach a wide audience that's appropriate for ministerial responsibilities'.

Some politicians make no secret of their AI use. In a newspaper column, Scottish Member of Parliament Graham Leadbitter said he uses AI to write speeches because it helps him sift through dense reading and gives him 'a good basis to work from' – but emphasised that he still calls the shots.

'I choose the subject matter, I choose the evidence I want it to access, I ask for a specific type of document, and I check what's coming out accords with what I want to achieve,' Leadbitter wrote in The National.

And in 2024, the European Commission rolled out its own generative AI tool, called GPT@EC, to help staff draft and summarise documents on an experimental basis.

ChatGPT available to US public servants

Meanwhile, OpenAI announced a partnership this week with the US government to give the country's entire federal workforce access to ChatGPT Enterprise at the nominal cost of $1 for the next year.

The announcement came shortly after the Trump administration launched its AI Action Plan, which aims to expand AI use across the federal government to boost efficiency and slash time spent on paperwork, among other initiatives.

In a statement, OpenAI said the programme would involve 'strong guardrails, high transparency, and deep respect' for the 'public mission' of federal government workers.

The company said it has seen the benefits of AI in the public sector through its pilot programme in Pennsylvania, where public servants reportedly saved an average of about 95 minutes per day on routine tasks using ChatGPT.

'Whether managing complex budgets, analysing threats to national security, or handling day-to-day operations of public offices, all public servants deserve access to the best technology available,' OpenAI said.

Trump declares 100% computer chip tariff unless firms build in the US

Euronews, 7 hours ago

President Donald Trump said on Wednesday that he would impose a 100% tariff on computer chips, only sparing companies that commit to 'building' on US soil.

The threat raises the prospect of higher prices for essential products dependent on the processors, and it will squeeze US tech firms, which often rely on Asia for chips. It also comes more than three months after Trump temporarily exempted most electronics from his most onerous tariffs.

The president announced the tariff alongside Apple CEO Tim Cook on Wednesday, who said his firm would invest an additional $100 billion in domestic manufacturing. That comes on top of a previous commitment made in February, bringing the total to $600bn. The pledge follows similar announcements from companies such as TSMC and Nvidia, which have promised to spend more in the US. Big Tech has already made collective commitments to invest about $1.5 trillion in the country since Trump moved back into the White House in January.

Now the question is whether the deal brokered between Cook and Trump will be enough to insulate the millions of iPhones made in China and India from the tariffs the administration has already imposed, and to reduce the pressure on the company to raise prices on the new models expected to be unveiled next month.

Wall Street seems to think so. After Apple's stock price gained 5% in Wednesday's regular trading session, the shares rose by more than 2% in extended trading after Trump made his exemption announcement. The shares of AI chipmaker Nvidia, which has also recently made big commitments to the US, rose marginally in extended trading, adding to the $1 trillion gain in market value the Silicon Valley company has made since the start of Trump's second administration.

Demand for computer chips has been climbing worldwide, with sales increasing 19.6% in the year ending in June, according to the World Semiconductor Trade Statistics organisation.

Trump's tariff threats mark a significant break from the plans to revive US computer chip production that were drawn up during the administration of President Joe Biden. Since taking over from Biden, Trump has been deploying tariffs to incentivise more domestic production. Essentially, the president is betting that the threat of dramatically higher chip costs will force most companies to open factories domestically, despite the risk that tariffs could squeeze corporate profits and push up consumer prices.

By contrast, the bipartisan CHIPS and Science Act that Biden signed into law in 2022 provided more than $50bn to support new computer chip plants, fund research, and train workers for the industry. The mix of funding support, tax credits, and other financial incentives was meant to draw in private investment, a strategy that Trump has vocally opposed.

Grok, is that Gaza? AI image checks mislocate news photographs

France 24, 19 hours ago

The AFP photograph, taken by photojournalist al-Qattaa, showed a starving child in Gaza. But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph had been taken in Yemen nearly seven years ago.

The AI bot's false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact, the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025. Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that acknowledged the error – but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose inner workings are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics.

"We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT". Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.

In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI does not necessarily seek accuracy – "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok as showing Yemen in 2016. That error led internet users to accuse the French newspaper Liberation, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and to what happens during fine-tuning – the so-called alignment phase – which determines what the model rates as a good or bad answer.

"Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat – which is in part trained on AFP's articles under an agreement between the French start-up and the news agency – the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said.
