Latest news with #GlobalAntiScamAlliance


Forbes
3 days ago
- Business
- Forbes
Google Issues New $1 Trillion Threat Security Advisory
Whether it's the FBI warning about smartphone attacks leveraging fears of deportation among the U.S. foreign student population, recommendations to use a secret code as AI-powered phishing campaigns evolve, instant takeover attacks targeting Meta and PayPal users, or confirmed threats aimed at compromising your Gmail account, there is no escaping the cyber-scammers. Indeed, the Global Anti-Scam Alliance, whose advisory board includes the head of scam prevention at Amazon, Microsoft's director of fraud and abuse risk, and the vice president of security solutions with Mastercard, found that more than $1 trillion was lost globally to such fraud in 2024. But do not despair: despite the Federal Trade Commission warning of a 25% year-on-year increase in losses, Google is fighting back. Here's what you need to know.

There can be no doubt that online scams of all flavors are not only increasing in volume but also evolving. We've seen evidence of this in the increasing availability and cost-effectiveness of AI for powering such threat campaigns. No longer the sole stomping ground of solo actors and chancers looking to make a few bucks here and there, the scam threat landscape is now dominated by organized international groups operating at scale. The boundary between online and physical, offline fraud is blurring: hybrid campaigns are a reality, combining phone calls with internet-based calls to action.

The Global Anti-Scam Alliance State of Scams Report, published in November 2024, revealed the true cost of such crimes: $1.03 trillion globally in just 12 months. A March 2025 report from the Federal Trade Commission showed that U.S. consumers alone lost $12.5 billion last year, up 25% from 2023. And that GASA report also found that only 4% of victims worldwide reported being able to recover their losses.

Something has to be done, and Google's Trust and Safety teams, responsible for tracking and fighting scams of all kinds, are determined that they are the people to help do it. 'Scammers are more effective and act without fear of punishment when people are uninformed about fraud and scam tactics,' said Karen Courington, Google's vice president of consumer trusted experiences, trust & safety. In addition to tracking and defending against scams, Google's dedicated teams also aim to inform consumers by analyzing threats and sharing their observations, along with mitigation advice. The May 27 Google fraud and scams advisory does just that, describing the most pressing of the recent attack trends that have been identified. These are broken down into five separate scams, each with mitigating best-practice recommendations, as follows.

Customer support scams, which often display fake phone numbers while pretending to be legitimate help services, are evolving and exploiting victims through a combination of social engineering and web vulnerabilities, Google warned. Along with the protection offered by the on-device Gemini Nano model for identifying dangerous sites and scams, Google advised users to 'seek out official support channels directly, avoid unsolicited contacts or pop-ups and always verify phone numbers for authenticity.'

Malicious advertising scams, often employing lures such as free or cracked productivity software and games, have also evolved.
'Scammers are setting their sights on more sophisticated users,' Courington said, 'those with valuable assets like crypto wallets or individuals with significant online influence.' Google uses AI and human review to combat the threat and block ad accounts involved in such activity. Only download software from official sources, beware of too-good-to-be-true offers, and pay particular attention to browser warnings when they appear, Google said.

Google's teams have also seen an increase in fake travel websites as the summer vacation season gets closer, usually luring victims with cheap prices and unbelievable experiences. Again, these will likely impersonate well-known brands, hotels, and agencies. Google advised users to use tools such as 'about this result' to verify website authenticity. 'Avoid payment methods such as wire transfers or direct bank deposits,' Courington said, 'especially if requested via email or phone.'

The old chestnut of package tracking scams has not vanished, more's the pity. 'These scams often trick users into paying additional fees that real delivery services would never request,' Courington explained. Google has seen these scammers employ a tactic whereby the websites and messages used are changed dynamically, based on when the link is sent to the victim. Scam detection in Google Messages has been deployed as one level of protection, but Courington also recommended that users verify the status of any expected package with the shipping company or seller rather than through a link from an unknown source.

And finally, there's no escaping the road toll scams that continue to appear. 'A toll road scam involves scammers sending fraudulent text messages claiming that you owe unpaid toll fees,' Courington warned. Thankfully, these are not always the most realistic of threats, with Google analysts seeing users spammed by toll fee claims in states that don't even have toll roads. The best mitigating advice remains to pause, count to ten, and ask yourself whether the claim is plausible. If it is, then confirm it directly with the toll operator rather than via a link in a message.


CNN
08-05-2025
- CNN
Google says AI is making searching and browsing the web safer
Almost anyone who has used the internet has probably experienced that alarming moment when a window pops up claiming your device has a virus, encouraging you to click for tech support or download security software. It's a common online scam, and one that Google is aiming to fight more aggressively using artificial intelligence.

Google says it's now using a version of its Gemini AI model that runs on users' devices to detect and warn users of these so-called 'tech support' scams. It's just one of a number of ways Google is using advancements in AI to better protect users from scams across Chrome, Search and its Android operating system, the company said in a blog post Thursday.

The announcement comes as AI has enabled bad actors to more easily create large quantities of convincing, fake content — effectively lowering the barrier to carrying out scams that can be used to steal victims' money or personal information. Consumers worldwide lost more than $1 trillion to scams last year, according to the lobbying group Global Anti-Scam Alliance. So, Google and other organizations are increasingly using AI to fight scammers, too.

Phiroze Parakh, senior director of engineering for Google Search, said that fighting scammers 'has always been an evolution game,' where bad actors learn and evolve as tech companies put new protections in place. 'Now, both sides have new tools,' Parakh said in an interview with CNN. 'So, there's this question of, how do you get to use this tool more effectively? Who is being a little more proactive about it?'

Although Google has long used machine learning to protect its services, newer AI advancements have led to improved language understanding and pattern recognition, enabling the tech to identify scams faster and more effectively. Google said that in Chrome's 'enhanced protection' safe browsing mode on desktop, its on-device AI model can now scan a webpage in real time when a user clicks on it to look for potential threats. That matters because, sometimes, bad actors make their pages appear differently to Google's existing crawler tools for identifying scams than they do to users, a tactic called 'cloaking' that the company warned last year was on the rise. And because the model, called Gemini Nano, runs on your device, the service works faster and protects users' privacy, said Jasika Bawa, group product manager for Google Chrome. As with Chrome's existing safe browsing mode, if a user attempts to access a potentially unsafe site, they'll see a warning before being given the option to continue to the page.

In another update, Google will warn Android users if they're receiving alerts from fishy sites in Chrome and let them automatically unsubscribe, so long as they have Chrome website notifications enabled.

Google has also used AI to detect scammy results and prevent them from showing up in Search, regardless of what kind of device users are on. Google Search now blocks 20 times as many problematic pages as it did when it first launched AI-powered versions of its anti-scam systems three years ago. 'We've seen this incredible advantage with our ability to understand language and nuance and relationships between entities that really made a change in how we detect these scammy actors,' Parakh said, adding that in 2024 alone, the company removed hundreds of millions of scam search results daily because of the AI advancements.
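The 'cloaking' tactic described above can be illustrated with a toy check: fetch the same page once with a crawler-style User-Agent and once with a browser-style one, then compare the responses. This is only a sketch of the concept, not Google's actual method; the on-device Gemini Nano scanning is far more sophisticated and not public, the URL below is a placeholder, and real cloaking often keys on IP ranges and JavaScript execution that a header swap alone cannot expose.

```python
# pip install requests
# Toy illustration of spotting "cloaking": a site serving different
# content to crawlers than to human visitors. Not Google's method.
import difflib
import requests

CRAWLER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

def cloaking_score(url: str) -> float:
    """Return a similarity ratio between the page served to a crawler
    and the page served to a browser; low values hint at cloaking."""
    as_crawler = requests.get(url, headers={"User-Agent": CRAWLER_UA}, timeout=10)
    as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10)
    return difflib.SequenceMatcher(None, as_crawler.text, as_browser.text).ratio()

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(f"cross-fetch similarity: {cloaking_score('https://example.com/'):.2f}")
```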
Parakh said, for example, that AI has made Google better able to identify and remove a scam where bad actors create fake 'customer service' pages or phone numbers for airlines. Google says it has now decreased scam attacks in airline-related searches by 80%.

Google isn't the only company using AI to fight bad actors. British mobile phone company O2 said last year it was fighting phone scammers with 'Daisy,' a conversational AI chatbot meant to keep fraudsters on the phone, giving them less time to talk with would-be human victims. Microsoft has also piloted a tool that uses AI to analyze phone conversations to determine whether a call may be fraudulent and alert the user accordingly. And the US Treasury Department said last year that AI had helped it identify and recover $1 billion worth of check fraud in fiscal 2024 alone.


Fast Company
08-05-2025
- Business
- Fast Company
AI scam calls are getting smarter. Here's how telecoms are fighting back
Scam calls are turning the world on its head. The Global Anti-Scam Alliance estimates that scammers stole a staggering $1.03 trillion globally in 2023, including losses from online fraud and scam calls. Robocalls and phone scams have long been a frustrating—and often dangerous—problem for consumers. Now, artificial intelligence is elevating the threat, making scams more deceptive, efficient, and harder to detect.

While Eric Priezkalns, an analyst and editor at Commsrisk, believes the impact of AI on scam calls is currently exaggerated, he notes that scammers' use of AI is focused on producing fake content that looks real, or on varying the content in messages designed to lure potential victims into malicious conversations. 'Varying the content makes it much more difficult to identify and block scams using traditional anti-scam controls,' he tells Fast Company.

From AI-generated deepfake voices that mimic loved ones to large-scale fraud operations that use machine learning to evade detection, bad actors are exploiting AI to supercharge these scam calls. The big question is: How can the telecom industry combat this problem head-on before fraudsters wreak even more havoc?

SCAMMERS ARE UPGRADING THEIR PLAYBOOK WITH AI

Until recently, phone scams mostly relied on crude robocalls—prerecorded messages warning recipients about an urgent financial issue or a supposed problem with their Social Security number. These tactics, while persistent, were often easy to recognize. But today's AI-powered scams are far more convincing.

One of the most alarming developments is the use of AI-generated voices, which make scams feel disturbingly personal. In a chilling case from April 2023, a mother in Arizona received a desperate call from what sounded exactly like her daughter, sobbing and pleading for help. A scammer, posing as a kidnapper, demanded ransom money. In reality, the daughter was safe—the criminals had used AI to clone her voice from a social media video. These scams, known as 'voice cloning fraud,' have surged in recent months. With just a few seconds of audio, AI tools can now create an eerily realistic digital clone of a person's voice, enabling fraudsters to impersonate friends, family members, or even executives in corporate scams.

Scammers are also using AI to analyze vast amounts of data and fine-tune their schemes with chilling precision. Machine learning algorithms can sift through public information—social media posts, online forums, and data breaches—to craft hyper-personalized scam calls. Instead of a generic IRS or tech support hoax, fraudsters can now target victims with specific details about their purchases, travel history, or even medical conditions.

AI is also enhancing caller ID spoofing, allowing scammers to manipulate phone numbers to appear as if they are coming from local businesses, government agencies, or even a victim's own contacts. This increases the likelihood that people will pick up, making scam calls harder to ignore.

TELECOM'S COUNTEROFFENSIVE: AI VS. AI

As fraudsters sharpen their AI tools, telecom companies and regulators are fighting back with artificial intelligence of their own—deploying advanced systems to detect, trace, and block malicious calls before they ever reach consumers.

1. Call authentication and AI-based fraud detection

To combat spoofing, telecom carriers are leveraging AI-powered voice analysis and authentication technologies. In the U.S., the STIR/SHAKEN framework uses cryptographic signatures to verify that calls originate from legitimate sources. But as scammers quickly adapt, AI-driven fraud detection is becoming essential. Machine learning models trained on billions of call patterns can analyze real-time metadata to flag anomalies—such as sudden spikes in calls from specific regions or numbers linked to known scams. These AI systems can even detect subtle acoustic markers typical of deepfake-generated voices, helping stop fraudulent calls before they connect.
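To make the STIR/SHAKEN step concrete: the framework attaches a PASSporT (RFC 8225), an ES256-signed JWT, to a call's SIP Identity header, and the terminating side verifies it. The sketch below, written in Python with the PyJWT library, assumes the signer's public key has already been extracted from the certificate referenced by the token's x5u header; a production verifier would also validate that certificate chain against approved STI certificate authorities, which is omitted here.

```python
# pip install pyjwt cryptography
# Minimal sketch of SHAKEN PASSporT (RFC 8225) verification. A real
# verifier also validates the signer's certificate chain (omitted).
import time
import jwt  # PyJWT

def verify_passport(identity_token: str, signer_pubkey_pem: str) -> dict:
    header = jwt.get_unverified_header(identity_token)
    if header.get("alg") != "ES256" or header.get("ppt") != "shaken":
        raise ValueError("not a SHAKEN PASSporT")

    # Check the ES256 signature with the key from the x5u certificate.
    claims = jwt.decode(identity_token, signer_pubkey_pem, algorithms=["ES256"])

    # Reject stale tokens; RFC 8224 suggests roughly a one-minute window.
    if time.time() - claims["iat"] > 60:
        raise ValueError("stale PASSporT")

    # "A" is full attestation: the carrier vouches for caller and number.
    if claims.get("attest") != "A":
        print(f"warning: attestation level {claims.get('attest')!r}")

    print(f"verified call from {claims['orig']['tn']} to {claims['dest']['tn']}")
    return claims
```

And as a purely statistical stand-in for the machine-learned anomaly flagging described above, a rolling z-score over per-route call volumes captures the 'sudden spike' idea; the window size and threshold here are arbitrary illustrations, not production values.

```python
import statistics

def is_spike(history: list[int], current: int, z_threshold: float = 4.0) -> bool:
    """Flag the current interval's call count if it sits far above the
    route's recent baseline. A toy stand-in for production ML models."""
    if len(history) < 10:
        return False  # not enough baseline data yet
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return (current - mean) / stdev > z_threshold

# Example: a route that normally carries ~100 calls/minute suddenly hits 900.
baseline = [96, 103, 99, 101, 98, 104, 97, 100, 102, 99]
print(is_spike(baseline, 900))  # True -> route flagged for review
```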
2. Carrier-level call filtering and blocking

Major telecom providers are embedding AI-powered call filtering directly into their networks. AT&T's Call Protect, T-Mobile's Scam Shield, and Verizon's Call Filter all use AI to spot suspicious patterns and block high-risk calls before they reach users. The GSMA's Call Check and International Revenue Share Fraud (IRSF) solutions also provide real-time call protection by verifying legitimacy and combating calling line identity spoofing. For context, GSMA's IRSF Prevention leverages first-party International Premium Rate Numbers (IPRN) data and an advanced OSINT (open-source intelligence) platform to deliver real-time, actionable fraud intelligence. It tracks over 20 million IPRNs, hijacked routes, and targeted networks—helping telecoms proactively combat IRSF and Wangiri fraud.

3. AI-powered voice biometrics for caller verification

Another promising line of defense against AI-generated fraud is voice biometrics. Some financial institutions and telecom providers are deploying voice authentication systems that analyze more than 1,000 unique vocal characteristics to verify a caller's identity. Unlike basic voice recognition, these advanced systems can detect when an AI-generated voice is being used—effectively preventing fraudsters from impersonating legitimate customers.
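As a rough illustration of the voice-biometrics comparison step: many systems reduce a voice to a fixed-length embedding (a 'voiceprint') using a pretrained speaker-encoder model, then measure similarity between the enrolled print and one extracted from the live call. The encoder itself is assumed and not shown here (random vectors stand in for its output), the threshold is illustrative rather than any standard, and catching AI-generated speech in practice relies on a separate liveness/anti-spoofing classifier.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def caller_matches(enrolled: np.ndarray, live: np.ndarray,
                   threshold: float = 0.75) -> bool:
    """Accept the caller only if the live voiceprint sits close enough
    to the enrolled one. Threshold is illustrative, not a standard."""
    return cosine_similarity(enrolled, live) >= threshold

# Toy usage: random 256-dim vectors stand in for a real speaker
# encoder's embeddings of the enrollment and live-call audio.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
live = enrolled + rng.normal(scale=0.1, size=256)  # same "speaker", noisy
print(caller_matches(enrolled, live))  # True
```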
REGULATORS ARE CRACKING DOWN, BUT IS IT ENOUGH?

It's one thing to tighten regulations and stiffen penalties—something many government agencies around the world are already doing—but effectively enforcing those regulations is a different ball game altogether. In the U.S., for example, the FCC (Federal Communications Commission) has ramped up penalties for illegal robocalls and is pushing carriers to adopt stricter AI-powered defenses. The TRACED (Telephone Robocall Abuse Criminal Enforcement and Deterrence) Act, signed into law in 2019, gives regulators more power to fine scammers and mandates stronger anti-spoofing measures.

Internationally, regulators in the U.K., Canada, and Australia are working on similar AI-driven frameworks to protect consumers from rising fraud. The European Union has introduced stricter data privacy laws, limiting how AI can be used to harvest personal data for scam operations.

However, enforcement struggles to keep pace with the speed of AI innovation. Scammers operate globally, often beyond the jurisdiction of any single regulator. Many fraud rings are based in countries where legal action is difficult—if not nearly impossible. Take, for example, countries like Myanmar, Cambodia, and Laos, where organized crime groups have established cyber scam centers that use AI-powered deepfakes to deceive victims worldwide. Operators in these scam centers frequently relocate or shift tactics to stay ahead of law enforcement. They also operate in regions with complex jurisdictional challenges, further complicating enforcement.

Scammers thrive on fragmentation and exploit vulnerabilities—whether that's a lack of industry coordination or differing regulatory approaches across borders. These regulatory bottlenecks underscore why telecom providers must take a more proactive role in combating AI-driven fraud, rather than relying solely on traditional frameworks which—while helpful—are not always efficient. That's where the GSMA Call Check technology, developed by German telecom solutions provider Oculeus, could play a vital role. 'The GSMA's Call Check services provide a simple, fast and low-cost mechanism for the exchange of information about scam phone calls as they occur. This technology is rooted in the cloud, making it future-proof and global in a way that other methods being contemplated by some nations will never be,' Commsrisk's Priezkalns says.

FAR FROM OVER

Without question, the battle against AI-powered scams is far from over. As former FCC Chair Jessica Rosenworcel noted last year: 'We know that AI technologies will make it cheap and easy to flood our networks with deepfakes used to mislead and betray trust.' The good news is that the telecom industry isn't backing down. While scammers are using AI to deceive unsuspecting individuals, the industry is also leveraging AI to protect customers and their sensitive data—through automated call screening, real-time fraud detection, and enhanced authentication measures.

But according to Priezkalns, technology alone isn't enough to protect people. For him, deterrence—driven by the legal prosecution of scammers—is just as important as technological solutions. 'It needs to be used in conjunction with law enforcement agencies that proactively arrest scammers and legal systems that ensure scammers are punished for their crimes,' he says.

One thing is certain: Scammers and scams aren't going away anytime soon. As Priezkalns points out, people will continue to fall for scams even with high-intensity public awareness training. But as AI continues to evolve, the telecom industry must stay a step ahead—ensuring it becomes a force for protection, not deception. And with tools like the GSMA's Call Check, that future is within reach.