
Latest news with #SentiLink

Social Security number leaked? Chances are, a criminal is already trying to use it

NBC News

6 days ago



Data breaches that expose the personal information of millions of people can have very real consequences for individuals, most notably identity theft. A new study found that the more of a person's personal information is easily accessible to criminals on the dark web, the more likely that person is to be targeted. The odds skyrocket if the exposed data includes a Social Security number, which is usually required for financial applications such as opening a bank account, applying for a credit card or filing a tax return, all common targets for fraudsters.

The findings, published Monday by SentiLink, a company that monitors customer data for fraud on behalf of finance companies, are intuitive. But they are believed to be the first of their kind to show a clear correlation between the robust cybercriminal trade in people's identities and real-world attempts to commit fraud against those victims.

David Maimon, SentiLink's head of fraud insight, told NBC News he conducted the study by comparing three sets of data from people whose information is available online. For a baseline list of people whose names and addresses were available online, he used publicly available voter registration records. For a set of people whose names and addresses (but not Social Security numbers) were widely traded among criminals, he took names off stolen checks from a ring of check fraud thieves that operated on Telegram. For a list of people whose names, addresses and Social Security numbers were all widely traded among cybercriminals, he downloaded a database of around 100,000 victims, cobbled together from various hacks and repeatedly traded on the dark web since 2021.

Maimon then compared more than 2,000 people from those datasets with SentiLink's internal records of attempted identity theft to see how often each of those people had been targeted. The results were dramatic. Only 2.1% of the people from voter registration records had been targeted by identity thieves. Of those whose names were found in the stolen check ring, 12.1% had been targeted. But nearly all of the people in the 2021 database with Social Security numbers, 97%, had been victims of attempted identity theft, Maimon found.

Data breaches have become increasingly common, to the point where most Americans' information has been repeatedly stolen. The U.S. Federal Trade Commission received 1.1 million reports of identity theft in 2024, which is believed to be a severe undercount of the true number of victims. Even children often have their Social Security numbers stolen, and credit monitoring services rarely do much to help victims. According to statistics provided to NBC News by the nonprofit Identity Theft Resource Center, there were 1,857 new data breaches in 2024 that included Americans' Social Security numbers.

While Social Security numbers are routinely hacked, it is often impossible for a victim to know how widely their information has been shared, a key component of SentiLink's findings. Not all breaches are equal, and cybercriminals often sell hacked data only to the highest bidder to keep it more exclusive. When a person's information does become widely repackaged and repeatedly traded between criminals, Maimon found, that person is targeted by thieves again and again over a longer period of time.
The best course of action, Maimon said, is for people to freeze their credit files with the three major credit bureaus and to monitor their ChexSystems consumer report to see whether anyone has opened bank accounts in their name.
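For technically minded readers, the comparison Maimon describes reduces to a set intersection: join each identity dataset against a log of attempted-fraud records and report the share of people who appear in both. The sketch below is a minimal illustration of that idea in Python; the file names, column names, and name-plus-address matching key are hypothetical stand-ins, not SentiLink's actual data or pipeline, which would presumably match on stronger identifiers than a normalized name.

```python
import csv

def load_identities(path):
    """Load a dataset of identities keyed by normalized (name, address).

    Hypothetical CSV layout with 'name' and 'address' columns. A real
    study would match on more robust identifiers than a cleaned-up name.
    """
    with open(path, newline="") as f:
        return {
            (row["name"].strip().lower(), row["address"].strip().lower())
            for row in csv.DictReader(f)
        }

def targeting_rate(dataset, fraud_log):
    """Share of identities in `dataset` that appear in the fraud-attempt log."""
    if not dataset:
        return 0.0
    return len(dataset & fraud_log) / len(dataset)

# Hypothetical input files standing in for the three datasets in the study
# and for an internal log of attempted identity theft.
fraud_log = load_identities("attempted_identity_theft.csv")
for label, path in [
    ("voter rolls (baseline)", "voter_registrations.csv"),
    ("stolen-check ring", "check_ring_names.csv"),
    ("dark-web SSN database", "ssn_breach_2021.csv"),
]:
    print(f"{label}: {targeting_rate(load_identities(path), fraud_log):.1%} targeted")
```

Under this framing, the study's headline numbers (2.1%, 12.1%, and 97%) are simply the three intersection rates, rising with how much of each identity was in criminal circulation.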

Deepfake Scams Are Distorting Reality Itself

WIRED

June 4, 2025



The easy access that scammers have to sophisticated AI tools means everything from emails to video calls can't be trusted.

Imagine you meet someone new. Be it on a dating app or social media, you chance across each other online and get to talking. They're genuine and relatable, so you quickly take it out of the DMs to a platform like Telegram or WhatsApp. You exchange photos and even video call each other. You start to get comfortable. Then, suddenly, they bring up money. They need you to cover the cost of their Wi-Fi access, maybe. Or they're trying out this new cryptocurrency. You should really get in on it early! And then, only after it's too late, you realize that the person you were talking to was in fact not real at all. They were a real-time AI-generated deepfake hiding the face of someone running a scam.

This scenario might sound too dystopian or science-fictional to be true, but it has already happened to countless people. With the spike in the capabilities of generative AI over the past few years, scammers can now create realistic fake faces and voices to mask their own in real time. And experts warn that those deepfakes can supercharge a dizzying variety of online scams, from romance to employment to tax fraud.

David Maimon, the head of fraud insights at identity verification firm SentiLink and a professor of criminology at Georgia State University, has been tracking the evolution of AI romance scams and other kinds of AI fraud for the past six years. 'We're seeing a dramatic increase in the volume of deepfakes, especially in comparison to 2023 and 2024,' Maimon says. 'It wasn't a whole lot. We're talking about maybe four or five a month,' he says. 'Now, we're seeing hundreds of these on a monthly basis across the board, which is mind-boggling.'

Deepfakes are already being used in a variety of online scams. One finance worker in Hong Kong, for example, paid $25 million to a scammer posing as the company's chief financial officer in a deepfaked video call. Some deepfake scammers have even posted instructional videos on YouTube, which carry a disclaimer that they are for 'pranks and educational purposes only.' Those videos usually open with a romance scam call in which an AI-generated handsome young man is talking to an older woman.

More traditional deepfakes (a pre-rendered video of a celebrity or politician, say, rather than a live fake) have also become more prevalent. Last year, a retiree in New Zealand lost around $133,000 to a cryptocurrency investment scam after seeing a Facebook advertisement featuring a deepfake of the country's prime minister encouraging people to buy in.

Maimon says SentiLink has started to see deepfakes used to create bank accounts in order to lease an apartment or engage in tax refund fraud. He says an increasing number of companies have also seen deepfakes in video job interviews. 'Anything that requires folks to be online and which supports the opportunity of swapping faces with someone—that will be available and open for fraud to take advantage of,' Maimon says.

Part of the reason for this increase is that the barriers to creating deepfakes are getting lower. There are many easily accessible AI tools that can generate realistic faces, and many more that can animate those faces or build full-length videos out of them. Scammers often use images and videos of real people, deepfaked to slightly change their faces or alter what they're saying, to target their loved ones or hijack their public influence.
Matt Groh, a professor of management at Northwestern University who researches people's ability to detect deepfakes, says that point-and-click generative AI tools make it much easier to make small, believable changes to already-existing media. 'If there's an image of you on the internet, that would be enough to manipulate a face to look like it's saying something that you haven't said before or doing something you haven't done before,' Groh says.

It's not just fake video that you need to be worried about. With a few clips of audio, it's also possible to make a believable copy of somebody's voice. One study in 2023 found that humans failed to detect deepfake audio more than a quarter of the time. 'Just a single image and five seconds of audio online mean that it's definitely possible for a scammer to make some kind of realistic deepfake of you,' Groh says.

Deepfakes are becoming more pervasive in contexts other than outright scams. Social media has been flooded over the past year with AI-generated 'influencers' stealing content from adult creators by deepfaking new faces onto their bodies and monetizing the resulting videos. Deepfakes have even bled over into geopolitics, as when the mayors of multiple European capital cities held video calls with a fake version of the mayor of Kyiv, Ukraine. People have also started using deepfakes for personal reasons, like bringing back a dead relative or creating an avatar of a victim to testify in court.

So, if deepfakes are everywhere, how do you spot one? The answer is not technology. A number of technology companies, including OpenAI, have launched deepfake detection tools, and researchers have proposed mechanisms to detect deepfakes based on signals like the light reflected in a person's eyes or inconsistent facial movements, and have started investigating how to implement them in real time. But those models often cannot reliably detect different kinds of AI fakes. OpenAI's model, for example, is designed to flag only content generated with the company's own Dall-E 3 tool, not images from other generation models. There's also the risk that scammers can game AI detectors by repeatedly tweaking their content until it fools the software. 'The major thing we have to understand is that the technology we have right now is not good enough to detect those deepfakes,' Maimon says. 'We're still very much behind.'

For now, as video deepfakes get more popular, the best way to detect one relies on humans. Studies on deepfake detection show that people are better at judging whether videos are real or fake than audio or text content, and in some cases they even outperform leading detection models. Groh's team conducted one study that found that taking more time to decide whether an image was real or fake led to a significant increase in accuracy: up to eight percentage points for just 10 seconds of viewing time. 'This sounds almost so simple,' Groh says. 'But if you spend just a couple extra seconds, that leads to way higher rates of being able to distinguish an image as real or fake. One of the ways for any regular person to just be a little bit less susceptible to a scam is to ask, "Does this look actually real?" And if you just do that for a few extra seconds, we're all going to be a little bit better off.'

Deepfakes' popularity could be a double-edged sword for scammers, Groh says. The more widespread they are, the more people will be familiar with them and know what to look for. That familiarity has paid off in some cases.
Last summer, a Ferrari executive received a call from someone claiming to be the CEO of the company. The caller convincingly emulated the CEO's voice but abruptly hung up when the executive tried to verify their identity by asking what book the CEO had recommended just days earlier. The CEO of WPP, the world's biggest advertising agency, was also unsuccessfully targeted by a similar deepfake scam.

'I think there's a balancing act going on,' Groh says. 'We definitely have technology today that is generally hard for people to identify. But at the same time, once you know that there's a point-and-click tool that allows you to transform one element into something else, everyone becomes a lot more skeptical.'
