
Latest news with #WindowsQuickAssist

Microsoft reveals how AI tools have made e-commerce fraud, job scams and tech support frauds more dangerous

Time of India

21-04-2025



Microsoft, in its latest Cyber Signals report, says that artificial intelligence has significantly lowered barriers for cybercriminals, enabling more sophisticated and convincing fraud schemes. Between April 2024 and April 2025, Microsoft thwarted $4 billion in fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked approximately 1.6 million bot signup attempts per hour.

E-commerce fraud: AI creates convincing fake storefronts in minutes

AI tools now allow fraudsters to create convincing e-commerce websites in minutes rather than days or weeks. These sites feature AI-generated product descriptions, images, and fake customer reviews that mimic legitimate businesses. AI-powered customer service chatbots add another layer of deception, interacting with customers and stalling complaints with scripted excuses to delay chargebacks. Microsoft reports that much of this AI-powered fraud originates from China and Germany, with the latter being targeted due to its status as one of the largest e-commerce markets in the European Union. To combat these threats, Microsoft has implemented fraud detection systems across its products, including Microsoft Defender for Cloud and Microsoft Edge, which features website typo protection and domain impersonation detection using deep learning technology.

Job scams: AI powers fake interviews and employment offers

Employment fraud has evolved with generative AI, which enables scammers to create fake job listings using stolen credentials and to run AI-powered email campaigns targeting job seekers. These scams often appear legitimate through AI-powered interviews and automated correspondence, making it increasingly difficult to identify fraudulent offers.
Warning signs include unsolicited job offers promising high pay for minimal qualifications, requests for personal information including bank details, and offers that seem too good to be true. Microsoft advises job seekers to verify employer legitimacy by cross-checking company details on official websites and platforms like LinkedIn, and to be wary of emails from free domains rather than official company email addresses.

Tech support fraud: AI enhances social engineering attacks

While some tech support scams don't yet leverage AI, Microsoft has observed financially motivated groups like Storm-1811 impersonating IT support through voice phishing to gain access to victims' devices via legitimate tools like Windows Quick Assist. AI tools can expedite the collection and organization of information about targeted victims to create more credible social engineering lures. In response, Microsoft blocks an average of 4,415 suspicious Quick Assist connection attempts daily, approximately 5.46% of global connection attempts. The company has implemented warning messages in Quick Assist to alert users about possible scams before they grant access to their devices, and has developed a Digital Fingerprinting capability that leverages AI and machine learning to detect and prevent fraud.

Microsoft is taking a proactive approach to fraud prevention through its Secure Future Initiative. In January 2025, the company introduced a new policy requiring product teams to perform fraud prevention assessments and implement fraud controls as part of their design process. Microsoft has also joined the Global Anti-Scam Alliance to collaborate with governments, law enforcement, and other organizations to protect consumers from scams.
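The free-domain warning sign above lends itself to a simple automated check. This is a minimal sketch, assuming a hand-picked list of free webmail providers; it is not an official or exhaustive filter, only an illustration of the advice.

```python
# Toy check inspired by the advice above: flag recruiter emails sent from
# free webmail domains instead of a company address. The domain set below
# is an assumption made for this example.

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def recruiter_email_warning(address: str) -> bool:
    """True if the sender's domain is a known free webmail provider."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in FREE_MAIL_DOMAINS
```

A job offer from `hr.team@gmail.com` would trip this check, while one from a company-owned domain would not; of course, a company address alone does not prove legitimacy, so it complements rather than replaces cross-checking the employer.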

Microsoft Thwarts US$4 Billion In Fraud Attempts As AI-Driven Scams Surge

BusinessToday

21-04-2025



Microsoft said it blocked nearly US$4 billion in fraud attempts between April 2024 and April 2025, highlighting the scale and sophistication of cybercrime threats amid a global rise in AI-powered scams. According to the latest Cyber Signals report, Microsoft rejected 49,000 fraudulent partner enrolments and prevented approximately 1.6 million bot sign-up attempts per hour, as AI tools continue to lower the barrier for cybercriminals.

Generative AI tools are now used to craft convincing fake websites, job scams, and phishing campaigns with deepfakes and cloned voices. Microsoft observed a growing trend of AI-assisted scams originating from regions like China and Germany, where digital marketplaces are most active. Threat actors are now able to build fraudulent e-commerce websites and customer service bots in minutes, leveraging AI-generated content to mislead consumers into trusting fake storefronts and reviews. These deceptive practices have become increasingly difficult to detect.

Microsoft's multi-layered response includes domain impersonation protection, scareware blockers, typo protection, and fake job detection systems across Microsoft Edge, LinkedIn, and other platforms. Windows Quick Assist has also been enhanced with in-product warnings and fraud detection. The tool now blocks over 4,400 suspicious connection attempts daily, thanks to Digital Fingerprinting and AI-driven risk signals.

Scammers continue to exploit job seekers by generating fake listings, AI-written interviews, and phishing campaigns. Microsoft recommends job platforms enforce multifactor authentication and monitor deepfake-generated interviews to mitigate risks. Meanwhile, groups like Storm-1811 have impersonated IT support via Windows Quick Assist, gaining unauthorised device access without using AI. Microsoft has since strengthened safeguards and suspended accounts linked to such abuse.
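The two Quick Assist figures in the report (4,415 blocked connection attempts per day, representing about 5.46% of global attempts) imply a total daily connection volume. A quick back-of-the-envelope calculation, purely to put the blocked share in context:

```python
# Sanity check on the report's Quick Assist figures: if 4,415 daily blocks
# represent ~5.46% of all connection attempts, the implied global total is
# roughly 81,000 attempts per day.

blocked_per_day = 4_415
blocked_share = 0.0546  # 5.46%, as stated in the report

total_attempts = blocked_per_day / blocked_share
print(f"Implied global Quick Assist attempts/day: {total_attempts:,.0f}")
```

This is only an inference from the two published numbers; the report itself does not state the total volume.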
As part of its Secure Future Initiative, Microsoft introduced a new policy in January 2025 requiring all product teams to perform fraud risk assessments during the design phase. The goal is to embed security measures directly into the architecture of products and services. Corporate Vice-President of Anti-Fraud and Product Abuse, Kelly Bissell, said Microsoft's defence strategy relies not only on technology but also public education and industry collaboration. Microsoft is working closely with global enforcement agencies through the Global Anti-Scam Alliance (GASA) to dismantle criminal infrastructures.

'Cybercrime is a trillion-dollar problem. AI gives us the ability to respond faster, but it also requires all of us—tech firms, regulators, and users—to work together,' said Bissell.

To stay protected, consumers are advised to:

  • Verify job listings and company legitimacy.
  • Avoid unsolicited offers via text or personal emails.
  • Be wary of websites offering 'too good to be true' deals.
  • Use browsers with fraud protection and never share personal or financial information with unverified sources.
