
Microsoft Thwarts US$4 Billion In Fraud Attempts As AI-Driven Scams Surge

BusinessToday

21-04-2025


Microsoft said it blocked nearly US$4 billion in fraud attempts between April 2024 and April 2025, highlighting the scale and sophistication of cybercrime threats amid a global rise in AI-powered scams. According to the latest Cyber Signals report, Microsoft rejected 49,000 fraudulent partner enrolments and prevented approximately 1.6 million bot sign-up attempts per hour, as AI tools continue to lower the barrier for cybercriminals.

Generative AI tools are now used to craft convincing fake websites, job scams, and phishing campaigns with deepfakes and cloned voices. Microsoft observed a growing trend of AI-assisted scams originating from regions such as China and Germany, where digital marketplaces are most active. Threat actors are now able to build fraudulent e-commerce websites and customer service bots in minutes, leveraging AI-generated content to mislead consumers into trusting fake storefronts and reviews. These deceptive practices have become increasingly difficult to detect.

Microsoft's multi-layered response includes domain impersonation protection, scareware blockers, typo protection, and fake job detection systems across Microsoft Edge, LinkedIn, and other platforms. Windows Quick Assist has also been enhanced with in-product warnings and fraud detection; the tool now blocks over 4,400 suspicious connection attempts daily, thanks to Digital Fingerprinting and AI-driven risk signals.

Scammers continue to exploit job seekers by generating fake listings, AI-written interviews, and phishing campaigns. Microsoft recommends that job platforms enforce multifactor authentication and monitor for deepfake-generated interviews to mitigate risks. Meanwhile, groups like Storm-1811 have impersonated IT support via Windows Quick Assist, gaining unauthorised device access without using AI. Microsoft has since strengthened safeguards and suspended accounts linked to such abuse.

As part of its Secure Future Initiative, Microsoft introduced a new policy in January 2025 requiring all product teams to perform fraud risk assessments during the design phase. The goal is to embed security measures directly into the architecture of products and services.

Corporate Vice-President of Anti-Fraud and Product Abuse Kelly Bissell said Microsoft's defence strategy relies not only on technology but also on public education and industry collaboration. Microsoft is working closely with global enforcement agencies through the Global Anti-Scam Alliance (GASA) to dismantle criminal infrastructure. 'Cybercrime is a trillion-dollar problem. AI gives us the ability to respond faster, but it also requires all of us—tech firms, regulators, and users—to work together,' said Bissell.

To stay protected, consumers are advised to:

  • Verify job listings and company legitimacy.
  • Avoid unsolicited offers via text or personal email.
  • Be wary of websites offering 'too good to be true' deals.
  • Use browsers with fraud protection and never share personal or financial information with unverified sources.
