
Latest news with #GlobalAntiScamAlliance

BioCatch takes on social engineering scam market

Finextra

17-07-2025



This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.

BioCatch, which prevents financial crime by recognizing patterns in human behavior, today announced the launch of the latest edition of its behavior-based scam-fighting tool: BioCatch Scams360. For the first time, financial institutions will be able to consistently identify and prevent the majority of authorized push payment (APP) fraud in real time. APP fraud relies on the psychological manipulation of victims into willingly transferring their money to financial criminals. Banks deploying Scams360 give their customers unprecedented, purpose-built protection against romance, investment, business-email-compromise, purchase, impersonation, and other social engineering scams that continue to grow in both scale and sophistication around the world.

'Already, we're seeing a 50% improvement in our ability to detect non-impersonation scams,' BioCatch Chief Product Officer Ayelet Eliezer said. 'Scams360's current alert rate – the percentage of total transactions requiring banks to intervene – is also best-in-class, helping banks deploying Scams360 to keep their operational costs low while stopping more scams in real time, before any money leaves the would-be victim's account.'

The Global Anti-Scam Alliance estimates scams now account for more than $1 trillion in consumer losses every year, and that total continues to grow. The GenAI era has transformed the creation, execution, reach, targeting, and refinement of social engineering scams, and promises further, potentially exponential, gains on all of those fronts. Legacy defenses, often based on transaction data or device and network intelligence, fail to detect modern scams that manipulate their victims. In most cases, social engineering scam victims willingly authorize the payments themselves, eluding detection by traditional fraud systems.

Scams360 leverages BioCatch's behavioral and device intelligence to arm financial institutions with the contextual understanding needed to distinguish genuine user intent from signs of manipulation. Examples include the speed of the user's typing, how quickly they respond to prompts, unusual mouse behavior, hesitation, erratic inputting of information, mouse-doodling, prolonged periods of inactivity, the presence of malicious apps, and an active phone call during an online banking session. BioCatch tracks as many as 3,000 such behavioral and device-related datapoints to distinguish the legitimate from the criminal.

'Over the past few years, we've seen innovations in payment channels, financial technology, and e-commerce lead to significant changes in consumer financial behaviors,' said Suzanne Sando, Lead Analyst of Fraud Management at Javelin Strategy & Research. 'The rapid evolution of these consumer activities presents criminals with numerous opportunities to identify weaknesses in scam-detection and better target victims. Financial criminals are clearly exploiting the growing attack surface. It's time for the financial services industry to regain control from fraudsters and invest in more modern and advanced methods for scam prevention and detection.'

Scams360 builds upon BioCatch's success in detecting impersonation scams (one regional bank in the U.S. that works with BioCatch stopped $100 million in impersonation scam payments in 2024 alone), giving BioCatch customers broader protection against the growing range of social engineering scam typologies.

'We are excited to see more innovation out of BioCatch to combat the global increase in scams,' The Knoble Founder and Board Chair Ian Mitchell said. 'BioCatch is leading a growing list of solution providers working to protect banking customers and communities from the increased complexity of scams.' BioCatch and the Knoble recently partnered to launch an anti-scam guide and cost calculator, highlighting how the cost of scams extends well beyond direct losses, driving up operational expenses, customer churn, compliance exposure, and reputational risk. On Sept. 4, the Knoble will host a virtual roundtable on scam prevention.

BioCatch now analyzes more than 15 billion user sessions per month, protecting more than 500 million people and 1.5 billion devices around the world from fraud and financial crime, while stopping an estimated $3.7 billion in fraudulent transactions in 2024.
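
To make the behavioral-signals idea concrete, here is a minimal, purely illustrative sketch of how a handful of session signals of the kind described above might be combined into a scam-risk score. The feature names, weights, and threshold are invented for this example; they are not BioCatch's actual model, which reportedly draws on thousands of behavioral and device datapoints.

```python
# Illustrative only: a toy risk score built from hypothetical behavioral signals.
# Names, weights, and the 0.6 threshold are invented for this sketch.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    typing_speed_zscore: float      # deviation from the user's usual typing speed
    prompt_response_seconds: float  # how quickly the user reacts to on-screen prompts
    hesitation_events: int          # unusual mid-form pauses for this user
    mouse_doodling: bool            # aimless cursor movement, e.g. while being coached
    active_voice_call: bool         # phone call detected during the banking session
    long_inactivity_seconds: float  # prolonged idle time before authorizing a payment

def scam_risk_score(s: SessionSignals) -> float:
    """Combine signals into a 0..1 risk score with hand-picked illustrative weights."""
    score = 0.0
    score += 0.25 * min(abs(s.typing_speed_zscore) / 3.0, 1.0)
    score += 0.15 * (1.0 if s.prompt_response_seconds > 20 else 0.0)
    score += 0.10 * min(s.hesitation_events / 5.0, 1.0)
    score += 0.15 * (1.0 if s.mouse_doodling else 0.0)
    score += 0.25 * (1.0 if s.active_voice_call else 0.0)
    score += 0.10 * min(s.long_inactivity_seconds / 120.0, 1.0)
    return min(score, 1.0)

# Example: a session showing coaching-like behavior crosses the intervention threshold.
session = SessionSignals(2.8, 35.0, 4, True, True, 90.0)
if scam_risk_score(session) > 0.6:
    print("Flag transaction for real-time intervention")
```

In practice, a production system would learn such weights from labeled fraud data and evaluate them in real time before the payment completes, which is what allows intervention before any money leaves the would-be victim's account.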

Google Issues Critical New Threat Advisory — Take Action Now

Forbes

02-06-2025



Update, June 1, 2025: This story, originally published May 30, has been updated to include a new strategic method of cutting at least some of the phishing threat off at its source, in response to the latest Google scam warnings.

Whether it's the FBI warning about smartphone attacks leveraging fears of deportation in the U.S. foreign student population, recommendations to use a secret code as AI-powered phishing campaigns evolve, instant takeover attacks targeting Meta and PayPal users, or confirmed threats aimed at compromising your Gmail account, there is no escaping the cyber-scammers. Indeed, the Global Anti-Scam Alliance, whose advisory board includes the head of scam prevention at Amazon, Microsoft's director of fraud and abuse risk, and the vice president of security solutions with Mastercard, found that more than $1 trillion was lost globally to such fraud in 2024. But do not despair: despite the Federal Trade Commission warning of a 25% year-on-year increase in losses, Google is fighting back. Here's what you need to know.

There can be no doubt that online scams of all flavors are not only increasing in volume but also evolving. We've seen evidence of this in the increasing availability and cost-effectiveness of AI for powering such threat campaigns. No longer the sole stomping ground of solo actors and chancers looking to make a few bucks here and there, the scam threat landscape is now dominated by organized international groups operating at scale. The boundary between online and offline fraud is blurring, with hybrid campaigns combining phone calls with internet calls to action. The Global Anti-Scam Alliance State of Scams Report, published in November 2024, revealed the true cost of such crimes: $1.03 trillion globally in just 12 months. A March 2025 report from the Federal Trade Commission showed that U.S. consumers alone lost $12.5 billion last year, up 25% from 2023. And that GASA report also found that only 4% of victims worldwide reported being able to recover their losses.

Something has to be done, and Google's Trust and Safety teams, responsible for tracking and fighting scams of all kinds, are determined to help do it. 'Scammers are more effective and act without fear of punishment when people are uninformed about fraud and scam tactics,' Karen Courington, Google's vice president of consumer trusted experiences, trust & safety, said. In addition to tracking and defending against scams, Google's dedicated teams also aim to inform consumers by analyzing threats and sharing their observations, along with mitigation advice. The May 27 Google fraud and scams advisory does just that, describing the most pressing of the recent attack trends that have been identified. These are broken down into five separate scams, each complete with mitigating best-practice recommendations, as follows.

Customer support scams, often displaying fake phone numbers while pretending to be legitimate help services, are evolving and exploiting victims through a combination of social engineering and web vulnerabilities, Google warned. Along with the protection offered by Gemini Nano on-device to identify dangerous sites and scams, Google advised users to 'seek out official support channels directly, avoid unsolicited contacts or pop-ups and always verify phone numbers for authenticity.'

Malicious advertising scams, often using lures such as free or cracked productivity software and games, have also evolved. 'Scammers are setting their sights on more sophisticated users,' Courington said, 'those with valuable assets like crypto wallets or individuals with significant online influence.' Google uses AI and human reviews to combat the threat and block ad accounts involved in such activity. Only download software from official sources, beware of too-good-to-be-true offers, and pay particular attention to browser warnings when they appear, Google said.

Google's teams have seen an increase in fake travel websites as the summer vacations get closer, usually luring victims with cheap prices and unbelievable experiences. Again, these will likely impersonate well-known brands, hotels, and agencies. Google advised users to use its tools, such as 'about this result,' to verify website authenticity. 'Avoid payment methods such as wire transfers or direct bank deposits,' Courington said, 'especially if requested via email or phone.'

The old chestnut of package-tracking scams has not vanished, more's the pity. 'These scams often trick users into paying additional fees that real delivery services would never request,' Courington explained. Google has seen these scammers employing a tactic whereby the websites and messages used are changed dynamically, based on when the link is sent to the victim. Scam detection in Google Messages has been deployed as one level of protection, but Courington also recommended that users verify the status of any expected package with the shipping company or seller rather than via a link from an unknown source.

And finally, there's no escaping the road toll scams that continue to appear. 'A toll road scam involves scammers sending fraudulent text messages claiming that you owe unpaid toll fees,' Courington warned. Thankfully, these are not always the most realistic of threats, with Google analysts seeing users spammed by toll-fee claims in states that don't even have toll roads. The best mitigating advice remains: pause, count to ten, and ask yourself whether the claim is plausible. If it is, confirm it directly with the toll operator rather than via a link in a message.

There are some people who just demand to be listened to, not through the loudness of their voice or the position of power they hold, but because of the sheer experience they bring to the table. When it comes to the phishing threat, one of those people is Paul Walsh. I have been around the online business more than long enough to remember when, in 2004, Walsh was tasked with refining World Wide Web creator Tim Berners-Lee's vision of one web. This was when Walsh co-founded the W3C Mobile Web Initiative; he had also been head of the New Technologies Team at AOL in the 90s. See, I told you I had been around a long time, and AOL wasn't even my first rodeo on the internet. The point being that Walsh has huge experience when it comes to the phishing threat, having helped launch AOL's Instant Messenger AIM client and becoming one of the first people online to fall victim to impersonation attacks as a result. But it doesn't end there: 'When I co-founded the W3C standard for URL Classification and Content Labeling in 2004,' Walsh told me, 'I co-invented the very concept of classifying/labeling folders, user accounts, etc., on the web.'

Now he's the CEO at MetaCert, a business that seeks to cut off the phishing threat directly at its source with a network-based solution for carriers to shield subscribers from SMS phishing attacks. Walsh told me that when it comes to phishing protection, threat intelligence is a fundamentally flawed method. 'Relying on historical data is useless—new URLs evade existing intelligence by design,' Walsh advised, adding that this reliance is, in his opinion, the biggest threat in cybersecurity today. While, in my never humble opinion, the advice from Google is certainly not to be ignored by users, Walsh does not agree. Treating suspicious links and unexpected attachments as red flags, Walsh claimed, is not only a poor warning strategy but positively harmful in 2025. With SMS taking over from email as the primary attack vector for phishing campaigns in 2024, Walsh said that 'authenticating URLs before delivery' is the only way to ensure they are safe, 'without relying on outdated historical data or AI.'

I will say this: while I agree with a lot of what Walsh has to say, talking about phishing protections in terms of what needs to happen in the future doesn't help potential victims now. As such, I would not ignore the Google threat advisory. Adopt a zero-trust approach: don't click on any link in an email or text message; instead, always go to the source yourself using your web browser. Authenticate everything.
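
For readers curious what 'authenticating URLs before delivery' could look like in principle, here is a minimal, purely illustrative sketch of a carrier-side check that only delivers a text message if every link in it points to a domain registered to the claiming sender. The registry, sender IDs, and function names are hypothetical and invented for this example; this is not MetaCert's actual product or API.

```python
# Illustrative only: authenticate URLs in an SMS before delivery, instead of matching
# them against historical blocklists. All names and data here are hypothetical.
import re
from urllib.parse import urlparse

# Hypothetical registry mapping approved sender IDs to the domains they may link to.
VERIFIED_SENDER_DOMAINS = {
    "ACME-BANK": {"acmebank.example", "secure.acmebank.example"},
    "PARCELCO": {"track.parcelco.example"},
}

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def deliver_sms(sender_id: str, body: str) -> bool:
    """Deliver only if every URL in the message resolves to a domain the sender owns."""
    allowed = VERIFIED_SENDER_DOMAINS.get(sender_id, set())
    for url in URL_PATTERN.findall(body):
        host = (urlparse(url).hostname or "").lower()
        # Accept the exact verified domain or any subdomain of it.
        if not any(host == d or host.endswith("." + d) for d in allowed):
            return False  # block before delivery: URL not authenticated for this sender
    return True

# Example: a spoofed parcel-fee text with an unverified link is blocked outright,
# while a message linking only to the sender's registered domain goes through.
print(deliver_sms("PARCELCO", "Your parcel is held. Pay at https://parcel-fee.example/pay"))   # False
print(deliver_sms("ACME-BANK", "Statement ready: https://secure.acmebank.example/login"))      # True
```

The design point Walsh argues for is visible in the sketch: the decision depends only on whether the link is positively verified for that sender, not on whether the URL has previously been reported as malicious.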

Google Issues New $1 Trillion Threat Security Advisory

Forbes

30-05-2025



Whether it's the FBI warning about smartphone attacks leveraging fears of deportation in the U.S. foreign student population, recommendations to use a secret code as AI-powered phishing campaigns evolve, instant takeover attacks targeting Meta and PayPal users, or confirmed threats aimed at compromising your Gmail account, there is no escaping the cyber-scammers. Indeed, the Global Anti-Scam Alliance, whose advisory board includes the head of scam prevention at Amazon, Microsoft's director of fraud and abuse risk, and the vice president of security solutions with Mastercard, found that more than $1 trillion was lost globally to such fraud in 2024. But do not despair: despite the Federal Trade Commission warning of a 25% year-on-year increase in losses, Google is fighting back. Here's what you need to know.

There can be no doubt that online scams of all flavors are not only increasing in volume but also evolving. We've seen evidence of this in the increasing availability and cost-effectiveness of AI for powering such threat campaigns. No longer the sole stomping ground of solo actors and chancers looking to make a few bucks here and there, the scam threat landscape is now dominated by organized international groups operating at scale. The boundary between online and offline fraud is blurring, with hybrid campaigns combining phone calls with internet calls to action. The Global Anti-Scam Alliance State of Scams Report, published in November 2024, revealed the true cost of such crimes: $1.03 trillion globally in just 12 months. A March 2025 report from the Federal Trade Commission showed that U.S. consumers alone lost $12.5 billion last year, up 25% from 2023. And that GASA report also found that only 4% of victims worldwide reported being able to recover their losses.

Something has to be done, and Google's Trust and Safety teams, responsible for tracking and fighting scams of all kinds, are determined to help do it. 'Scammers are more effective and act without fear of punishment when people are uninformed about fraud and scam tactics,' Karen Courington, Google's vice president of consumer trusted experiences, trust & safety, said. In addition to tracking and defending against scams, Google's dedicated teams also aim to inform consumers by analyzing threats and sharing their observations, along with mitigation advice. The May 27 Google fraud and scams advisory does just that, describing the most pressing of the recent attack trends that have been identified. These are broken down into five separate scams, each complete with mitigating best-practice recommendations, as follows.

Customer support scams, often displaying fake phone numbers while pretending to be legitimate help services, are evolving and exploiting victims through a combination of social engineering and web vulnerabilities, Google warned. Along with the protection offered by Gemini Nano on-device to identify dangerous sites and scams, Google advised users to 'seek out official support channels directly, avoid unsolicited contacts or pop-ups and always verify phone numbers for authenticity.'

Malicious advertising scams, often using lures such as free or cracked productivity software and games, have also evolved. 'Scammers are setting their sights on more sophisticated users,' Courington said, 'those with valuable assets like crypto wallets or individuals with significant online influence.' Google uses AI and human reviews to combat the threat and block ad accounts involved in such activity. Only download software from official sources, beware of too-good-to-be-true offers, and pay particular attention to browser warnings when they appear, Google said.

Google's teams have seen an increase in fake travel websites as the summer vacations get closer, usually luring victims with cheap prices and unbelievable experiences. Again, these will likely impersonate well-known brands, hotels, and agencies. Google advised users to use its tools, such as 'about this result,' to verify website authenticity. 'Avoid payment methods such as wire transfers or direct bank deposits,' Courington said, 'especially if requested via email or phone.'

The old chestnut of package-tracking scams has not vanished, more's the pity. 'These scams often trick users into paying additional fees that real delivery services would never request,' Courington explained. Google has seen these scammers employing a tactic whereby the websites and messages used are changed dynamically, based on when the link is sent to the victim. Scam detection in Google Messages has been deployed as one level of protection, but Courington also recommended that users verify the status of any expected package with the shipping company or seller rather than via a link from an unknown source.

And finally, there's no escaping the road toll scams that continue to appear. 'A toll road scam involves scammers sending fraudulent text messages claiming that you owe unpaid toll fees,' Courington warned. Thankfully, these are not always the most realistic of threats, with Google analysts seeing users spammed by toll-fee claims in states that don't even have toll roads. The best mitigating advice remains: pause, count to ten, and ask yourself whether the claim is plausible. If it is, confirm it directly with the toll operator rather than via a link in a message.

Google says AI is making searching and browsing the web safer

CNN

08-05-2025



Almost anyone who has used the internet has probably experienced that alarming moment when a window pops up claiming your device has a virus, encouraging you to click for tech support or download security software. It's a common online scam, and one that Google is aiming to fight more aggressively using artificial intelligence.

Google says it's now using a version of its Gemini AI model that runs on users' devices to detect and warn users of these so-called 'tech support' scams. It's just one of a number of ways Google is using advancements in AI to better protect users from scams across Chrome, Search and its Android operating system, the company said in a blog post Thursday. The announcement comes as AI has enabled bad actors to more easily create large quantities of convincing, fake content, effectively lowering the barrier to carrying out scams that can be used to steal victims' money or personal information. Consumers worldwide lost more than $1 trillion to scams last year, according to the lobbying group Global Anti-Scam Alliance. So, Google and other organizations are increasingly using AI to fight scammers, too.

Phiroze Parakh, senior director of engineering for Google Search, said that fighting scammers 'has always been an evolution game,' where bad actors learn and evolve as tech companies put new protections in place. 'Now, both sides have new tools,' Parakh said in an interview with CNN. 'So, there's this question of, how do you get to use this tool more effectively? Who is being a little more proactive about it?'

Although Google has long used machine learning to protect its services, newer AI advancements have led to improved language understanding and pattern recognition, enabling the tech to identify scams faster and more effectively. Google said that in Chrome's 'enhanced protection' safe browsing mode on desktop, its on-device AI model can now effectively scan a webpage in real time when a user clicks on it to look for potential threats. That matters because, sometimes, bad actors make their pages appear differently to Google's existing crawler tools for identifying scams than they do to users, a tactic called 'cloaking' that the company warned last year was on the rise. And because the model, called Gemini Nano, runs on your device, the service works faster and protects users' privacy, said Jasika Bawa, group product manager for Google Chrome. As with Chrome's existing safe browsing mode, if a user attempts to access a potentially unsafe site, they'll see a warning before being given the option to continue to the page.

In another update, Google will warn Android users if they're receiving alerts from fishy sites in Chrome and let them automatically unsubscribe, as long as they have Chrome website notifications enabled.

Google has also used AI to detect scammy results and prevent them from showing up in Search, regardless of what kind of device users are on. Since Google Search first launched AI-powered versions of its anti-scam systems three years ago, it now blocks 20 times the number of problematic pages. 'We've seen this incredible advantage with our ability to understand language and nuance and relationships between entities that really made a change in how we detect these scammy actors,' Parakh said, adding that in 2024 alone, the company removed hundreds of millions of scam search results daily because of the AI advancements. Parakh said, for example, that AI has made Google better able to identify and remove a scam in which bad actors create fake 'customer service' pages or phone numbers for airlines. Google says it has now decreased scam attacks in airline-related searches by 80%.

Google isn't the only company using AI to fight bad actors. British mobile phone company O2 said last year it was fighting phone scammers with 'Daisy,' a conversational AI chatbot meant to keep fraudsters on the phone, giving them less time to talk with would-be human victims. Microsoft has also piloted a tool that uses AI to analyze phone conversations to determine whether a call may be fraudulent and alert the user accordingly. And the US Treasury Department said last year that AI had helped it identify and recover $1 billion worth of check fraud in fiscal 2024 alone.
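
To illustrate why scanning the page a user actually sees helps against cloaking, here is a minimal, purely illustrative sketch of a client-side check of rendered page text for tech-support-scam hallmarks. Google's real protection uses the Gemini Nano model inside Chrome's enhanced Safe Browsing mode; the phrase list, threshold, and function below are invented for this example.

```python
# Illustrative only: a toy on-device check of rendered page text for tech-support
# scam hallmarks. Phrases and threshold are invented; this is not Gemini Nano.
SCAM_PHRASES = [
    "your computer has been locked",
    "virus detected",
    "call this number immediately",
    "do not shut down your computer",
    "microsoft support",  # brand impersonation is a common lure
]

def looks_like_tech_support_scam(rendered_text: str, hits_required: int = 2) -> bool:
    """Flag the page if enough scam phrases appear in the text the user actually sees.

    Evaluating the rendered page, rather than what a crawler was served, is what
    helps against 'cloaking', where crawlers see harmless content but users see a scam.
    """
    text = rendered_text.lower()
    hits = sum(phrase in text for phrase in SCAM_PHRASES)
    return hits >= hits_required

page = "WARNING: Virus detected! Your computer has been locked. Call this number immediately."
if looks_like_tech_support_scam(page):
    print("Show a Safe Browsing-style warning before letting the user continue")
```

Running such a check on the device, rather than sending page content to a server, is also what allows the privacy and latency benefits Bawa describes above.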

