Latest news with #Balasubramaniyan


Forbes
24-04-2025
- Business
- Forbes
This Fraud Detection Startup Made $100 Million Protecting Against Deepfake Calls
In January of last year, Atlanta-based startup Pindrop, a robocall and fraud-busting platform used mostly by call centers, had its 15 minutes of fame by defending the president. AI technology was being used to clone and impersonate former President Joe Biden's voice in New Hampshire, discouraging Democrats from voting. Pindrop was referenced across national media outlets as it accomplished what only a few in the space could: it identified the fraud at play and leveraged its massive collection of audio recordings to figure out what technology was used.

Flash forward more than a year, and Pindrop has passed a new milestone in its more than 10 years of operations by reaching annual recurring revenue of more than $100 million. That growth is built on an increasingly lucrative offering in this new age of AI: fighting deepfakes, or digitally created hoax recordings, images, or videos, often used for nefarious reasons. 'Its growth reflects both the urgency of the challenge and the standout accuracy of its platform,' Martin Casado, a general partner at Andreessen Horowitz, a Pindrop investor, told Forbes.

Pindrop offers three main products that combat fraud and identity theft. Its core products authenticate phone calls by verifying the caller's voice or confirming the call comes from a trusted device. In 2024, it bolstered its offerings with a new product that uses AI to determine whether a caller is human or a machine. Pindrop's services are already used at the call centers of eight of the ten largest banks to screen calls, identify suspicious speech patterns, and out fraudsters. The company has also been making inroads into health care and retail in recent years.

Fighting voice impersonation hasn't always been a booming business. Pindrop entered the deepfake space in 2017 and was noticed for identifying false voice clips in a 2021 documentary about chef Anthony Bourdain. These early detection abilities would evolve into its proprietary deepfake-identifying product. Before OpenAI released ChatGPT in late 2022, the company saw one deepfake phone call across its entire customer base monthly. At that time, it would simply detect the call as fraudulent, and then staffers would analyze it. 'Twenty people at Pindrop would show up on a Zoom call and we'd all listen to the call,' CEO Vijay Balasubramaniyan said. But by the end of 2024, it was flagging an average of seven false calls per day, per customer. Balasubramaniyan says the company has seen a 111% increase in deepfake-related fraud attempts since 2022. 'What we have found is within the first four or five seconds of a deepfake call, they make so many mistakes and they make them in such unique ways that not only can we identify it's a deepfake, we know which engine made the mistake,' Balasubramaniyan says.

Pindrop has been profitable since 2020 and completed a $100 million debt raise last summer from Hercules Capital. It has raised more than $235 million in total venture capital and was last valued at $925 million with its $90 million raise in 2018. Balasubramaniyan says the 280-person company is not currently raising more money. 'After raising $200 million, man, you don't want more money, you have to be a real business,' he explained.
Now, Pindrop is further building out this side of the business by continuing to invest in its deepfake detection abilities and aiming to integrate with job application processes and video meetings to detect more than just audio deepfakes. John Chambers, an angel investor and board member at Pindrop who worked for decades as CEO of Cisco, says the total deepfake market is potentially worth tens of billions of dollars. 'Customers will pay an unbelievable amount of money to know that they're not on a deepfake supply chain [or job applicant] call,' Chambers said, adding, 'It's a $60 billion market.'

And Pindrop has positioned itself well to capture it. From its decade of fraud fighting, it has a database of 5 billion call recordings, including 20 million deepfake audio files. That data set has allowed the company to sharpen its competitive edge against startups like identity verification provider Prove and deepfake protection platforms like Reality Defender and GetReal. In AI, that means keeping current with a fast-evolving technology and ever more realistic fakes. But Balasubramaniyan is confident in the company's AI detection abilities because audio data is so dynamic: every second of speech, he says, contains thousands of moments at which it can be examined.

For every advance Pindrop makes with its tech, a bad actor will surely find a new attack, Balasubramaniyan says. Pindrop's advantage is that it's still more expensive for fraudsters to generate a deepfake than it is to detect one. 'Deepfakes are going to get better, but we're not going to be sitting around twiddling our thumbs,' Balasubramaniyan says.
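The engine-attribution claim above ('we know which engine made the mistake') is, at bottom, a multi-class audio classification problem, and the frame-level scrutiny Balasubramaniyan describes is what standard audio feature extractors provide. The sketch below is purely illustrative and is not Pindrop's system: it summarizes MFCC features from the first five seconds of a clip and trains a classifier whose labels are 'human' plus a set of hypothetical synthesis engines.

```python
# Illustrative sketch only -- not Pindrop's system. A clip-level audio
# classifier that labels a recording as human speech or as one of several
# hypothetical text-to-speech engines, using MFCC summary features.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(path, seconds=5.0):
    """Summarize the first few seconds of audio as an MFCC feature vector."""
    # 16 kHz audio carries 16,000 samples per second; MFCC frames
    # summarize short overlapping windows of those samples.
    y, sr = librosa.load(path, sr=16000, duration=seconds)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape (20, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train(paths, labels):
    """Fit on a labeled corpus of real and synthesized clips (hypothetical
    here; Pindrop cites millions of deepfake files in its own data set)."""
    X = np.stack([clip_features(p) for p in paths])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X, labels)  # labels like "human", "engine_a", "engine_b", ...
    return clf

def attribute(clf, path):
    """Per-class probabilities: is the clip fake, and which engine made it?"""
    probs = clf.predict_proba(clip_features(path).reshape(1, -1))[0]
    return dict(zip(clf.classes_, probs))
```

The point is only that 'which engine made the mistake' maps naturally onto per-class probabilities; a production system would rely on far richer features, models, and training data than this sketch assumes.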
Yahoo
20-04-2025
- Business
- Yahoo
Sophisticated con artists aren't just using AI and deepfakes to steal money, they're stealing jobs as well
As if the job market wasn't tough enough, now job seekers must compete with con artists using stolen identities, AI, and deepfake techniques to get hired. Even technology companies can fall for the scams.

Pindrop Security, a company that helps detect fraud in voice interactions, has encountered such situations firsthand. The company shortlisted a candidate named 'Ivan' for a senior engineering position and set up a video interview. But as CEO Vijay Balasubramaniyan told CNBC, something felt off during the interview: the candidate's facial expressions didn't quite match his voice. It turned out Balasubramaniyan's gut feeling was right. The person on screen was using deepfake technology to conceal his own identity behind someone else's face. The Pindrop team caught it. But not everyone hiring a remote worker has the same expertise or technology to root out fraud. 'We are no longer able to trust our eyes and ears,' Balasubramaniyan said. 'Without technology, you're worse off than a monkey with a random coin toss.'

Fraudulent candidates now use a range of generative AI tools to scam their way through nearly every part of the hiring process. AI can fabricate photo IDs, generate polished LinkedIn profiles, and even simulate real-time answers in video interviews. Some scammers use remote desktops to route their traffic through the U.S., making it appear as if they're logging in from within the country.

These scammers aren't just stealing jobs. Once hired, imposters can install malware, steal sensitive customer data, or divert money to foreign adversaries like North Korea. 'Every time we list a job posting, we get 100 North Korean spies applying to it,' said Lili Infante, CEO of CAT Labs, a Florida-based startup working in cybersecurity and cryptocurrency. The U.S. Justice Department alleges that 300 American companies hired people linked to the North Korean regime; the department says the workers stole American identities to get remote IT jobs and then funneled millions to North Korea.

The rise in fake applicants doesn't just pose a cybersecurity threat. It could slow down hiring across the board. Legitimate candidates face longer wait times as companies are forced to double-check résumés, verify identities, and flag suspicious activity. These delays drive up costs. Some companies are turning to third-party verification services like Jumio, Socure, and iDenfy. Others, like Pindrop, are developing in-house video authentication tools. But as deepfake and identity-masking tools improve, experts warn the problem may get worse before it gets better. The research and consultancy firm Gartner predicts that by 2028, one in four job candidates globally will be fake. 'Folks think they're not experiencing it,' said Ben Sesser, CEO of BrightHire, an HR tech company, 'but I think it's probably more likely that they're just not realizing that it's going on.'
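The remote-desktop trick mentioned above, routing traffic through the U.S. to fake a domestic login, is one of the easier signals for an employer to screen. What follows is a minimal sketch of that kind of check; the network ranges and the geolocate() lookup are hypothetical stand-ins for the commercial IP-intelligence data that a vendor such as Jumio, Socure, or iDenfy would supply.

```python
# Minimal sketch of a login-origin screen. The datacenter ranges below are
# documentation-only example networks, and geolocate() is a placeholder for
# a real IP-geolocation service -- both are hypothetical.
import ipaddress

KNOWN_DATACENTER_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical hosting range
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical VPN exit range
]

def geolocate(ip: str) -> str:
    """Placeholder lookup; a real system would query an IP-intelligence feed."""
    demo = {"203.0.113.7": "US", "192.0.2.55": "KP"}
    return demo.get(ip, "UNKNOWN")

def screen_login(ip: str, claimed_country: str) -> list[str]:
    """Return flags when a login looks relayed or geographically inconsistent."""
    flags = []
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in KNOWN_DATACENTER_NETS):
        flags.append("ip-in-datacenter-range")  # possible remote-desktop relay
    if geolocate(ip) not in ("UNKNOWN", claimed_country):
        flags.append("geo-mismatch")
    return flags

print(screen_login("203.0.113.7", "US"))  # ['ip-in-datacenter-range']
```

A flag from a check like this is a prompt for closer verification, not proof of fraud; legitimate candidates sometimes travel or use corporate VPNs.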
Yahoo
11-04-2025
- Business
- Yahoo
Job applicants are using deepfake AI to trick recruiters—Here's how hiring managers can spot the next imposter
Vijay Balasubramaniyan knew there was a problem. The CEO of Pindrop, a 300-person information security company, says his hiring team came to him with a strange dilemma: they were hearing weird noises and tonal abnormalities while conducting remote interviews with job candidates. Balasubramaniyan immediately suspected that interviewees were using deepfake AI technology to mask their true identities. But unlike most other companies, Pindrop, as a fraud-detecting organization, was in a unique position to investigate the mystery itself.

To get to the bottom of it, the company posted a job listing for a senior back-end developer. It then used its own in-house technology to scan candidates for potential red flags. 'We started building these detection capabilities, not just for phone calls, but for conferencing systems like Zoom and Teams,' he tells Fortune. 'Since we do threat detection, we wanted to eat our own dog food, so to speak. And very quickly we saw the first deepfake candidate.' Out of 827 total applications for the developer position, the team found that roughly 100, or about 12.5%, were submitted under fake identities. 'It blew our mind,' says Balasubramaniyan. 'This was never the case before, and it tells you how, in a remote-first world, this is increasingly becoming a problem.'

Pindrop isn't the only company getting a deluge of job applications attached to fake identities. Although it's still a nascent issue, around 17% of hiring managers have already encountered candidates using deepfake technology to alter their video interviews, according to a March survey from career platform Resume Genius. And one startup founder recently told Fortune that about 95% of the résumés he receives are from North Korean engineers pretending to be American. As AI technology continues to progress at a rapid clip, businesses and HR leaders must brace for this new twist to an already complicated recruiting landscape and be ready to face the next deepfake AI candidate who shows up for an interview. 'My theory right now is that if we're getting hit with it, everybody's getting hit with it,' says Balasubramaniyan.

Some AI deepfake job applicants are simply attempting to land multiple jobs at once to boost their income. But there is evidence to suggest that more nefarious forces are at play, with big consequences for unwitting employers. In 2024, cybersecurity company CrowdStrike responded to more than 300 instances of criminal activity related to Famous Chollima, a major North Korean organized crime group. More than 40% of those incidents were traced to IT workers who had been hired under a false identity. 'Much of the revenue they're generating from these fake jobs is going directly to a weapons program in North Korea,' says Adam Meyers, a senior vice president of counter adversary operations at CrowdStrike. 'They're targeting login, credit card information, and company data.' And in December 2024, 14 North Korean nationals were indicted on charges related to a fraudulent IT worker scheme. They stand accused of funneling at least $88 million from businesses into a weapons program over the course of six years. The Department of Justice also alleges that some of those workers threatened to leak sensitive company information unless their employer paid an extortion fee.
Dawid Moczadło, the co-founder of data security software company Vidoc Security Lab, recently posted a video on LinkedIn of an interview he did with a deepfake AI job candidate, and it serves as a masterclass in potential red flags. The audio and video of the Zoom call didn't quite sync up, and the video quality also seemed off to him. 'When the person was moving and speaking I could see different shading on his skin and it looked very glitchy, very strange,' Moczadło tells Fortune. Most damning of all, when Moczadło asked the candidate to hold his hand in front of his face, he refused. Moczadło suspects that the filter used to create the false image would begin to fray if he did, much as Snapchat filters do, exposing his true face. 'Before this happened we just gave people the benefit of the doubt, that maybe their camera is broken,' says Moczadło. 'But after this, if they don't have their real camera on, we will just completely stop [the interview].'

It's a strange new world out there for HR leaders and hiring managers, but there are other tell-tale signs they can watch for earlier in the interview process that can save them major headaches later on. Deepfake candidates often use AI to create fake LinkedIn profiles that appear real but are missing critical information in their employment history, or have very little activity or few connections, Meyers notes. When it comes to the interview stage, these candidates are also often unable to answer basic questions about their life and job experience. For example, Moczadło says he recently interviewed a deepfake candidate who listed multiple well-known organizations on their résumé but couldn't share any detailed information about those companies.

Employers should also look out for new hires who ask to have their laptop shipped to a location other than their home address. Some people are operating 'laptop farms,' in which they keep multiple computers open and running so that people outside the country can log in remotely. And finally, employee impersonators are typically not the best workers. They often don't turn on their cameras during meetings, make excuses to hide their faces, or skip work gatherings altogether.

Moczadło says he's much more careful about hiring now and has implemented new procedures, such as paying for candidates to spend at least one full day in person at the company's office before they're hired. But he knows not everyone can afford to be so vigilant. 'We're in this environment where recruiters are getting thousands of applications,' says Moczadło. 'And when there's more pressure on them to hire people, they're more likely to overlook these early warning signs and create this perfect storm of opportunity to take advantage of.'
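The audio-video desynchronization Moczadło noticed by eye can also be estimated mechanically. The sketch below is a rough illustration under stated assumptions, not Vidoc's or anyone's production check: it assumes the recorded call has been split (e.g. with ffmpeg) into a 25 fps video file and a 16 kHz mono WAV, uses a fixed lower-face crop in place of real facial-landmark tracking, and cross-correlates mouth-region motion with the speech energy envelope to estimate the lag between lips and voice.

```python
# Rough illustration only -- not a production deepfake check. Assumes a
# 25 fps video file plus a 16 kHz mono WAV extracted from the same call,
# and a fixed lower-face crop as a crude stand-in for landmark tracking.
import cv2
import librosa
import numpy as np

FPS = 25  # assumed video frame rate

def mouth_motion_signal(video_path):
    """Mean per-frame pixel change in a fixed lower-face region."""
    cap = cv2.VideoCapture(video_path)
    prev, motion = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
        h, w = gray.shape
        crop = gray[int(h * 0.6):int(h * 0.9), int(w * 0.3):int(w * 0.7)]
        if prev is not None:
            motion.append(np.abs(crop - prev).mean())
        prev = crop
    cap.release()
    return np.array(motion)

def speech_envelope(wav_path):
    """RMS energy of the audio, one value per video frame."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.rms(y=y, hop_length=sr // FPS)[0]

def av_offset_frames(motion, envelope, max_lag=12):
    """Lag (in frames) that best aligns mouth motion with speech energy."""
    n = min(len(motion), len(envelope))
    m = (motion[:n] - motion[:n].mean()) / (motion[:n].std() + 1e-9)
    e = (envelope[:n] - envelope[:n].mean()) / (envelope[:n].std() + 1e-9)
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.mean(m[max(0, -l):n - max(0, l)] * e[max(0, l):n - max(0, -l)])
              for l in lags]
    return lags[int(np.argmax(scores))]

# offset = av_offset_frames(mouth_motion_signal("call.mp4"),
#                           speech_envelope("call.wav"))
# At 25 fps each frame of offset is 40 ms of lag between lips and voice.
```

A live speaker usually aligns within a frame or two, so a large, stable offset is one weak signal consistent with the glitches described above, not proof of a deepfake on its own.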
Yahoo
09-04-2025
- Business
- Yahoo
Fake job seekers using AI reportedly flooding job market
(KRON) — Fake job seekers are using artificial intelligence to try to fool recruiters conducting remote interviews, according to a CNBC report. Using AI tools, 'job seekers who aren't who they say they are' are able to fabricate photo IDs, generate employment histories, and answer interview questions, the report says.

Pindrop Security, an Atlanta-based voice authentication startup, recently posted a job for a coder. One applicant, according to the report, stood out from the others. The candidate, a Russian coder named Ivan, seemed to have all the right qualifications for the role. However, when he was interviewed over video, the Pindrop recruiter noticed that Ivan's facial expressions were out of sync with his words. The candidate, according to Pindrop CEO Vijay Balasubramaniyan, was a scammer using deepfake software and generative AI in a bid to get hired.

'Gen AI has blurred the line between what it is to be human and what it means to be machine,' Balasubramaniyan told CNBC. 'What we're seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.'

Why, you might ask, would someone want to use AI to get hired for a job they aren't really qualified for? According to CNBC, once hired, imposters can install malware or demand ransom from a company, or steal its customer data, trade secrets, or funds. In some cases, deceitful employees use the AI tools simply to get hired and collect a salary they wouldn't otherwise be able to, according to Balasubramaniyan. In another incident, posted to LinkedIn, a recruiter discovered mid-interview that a candidate was using deepfake technology to appear as someone they weren't; the recruiter abruptly ended the video call.

According to research and advisory firm Gartner, one in four job candidates will be fake by 2028. Cybersecurity and cryptocurrency firms in particular have seen a surge in AI-powered fake job seekers, industry experts told CNBC. Companies that hire for remote roles are particularly vulnerable to being targeted by these bad actors. 'Every time we list a job posting, we get 100 North Korean spies applying to it,' said Lili Infante, founder of CAT Labs, a startup that works in cybersecurity and cryptocurrency. 'When you look at their resumes, they look amazing; they use all the keywords for what we're looking for.'

One of the easiest ways to weed out these bad actors is to hold job interviews in person, even for remote positions. 'I'd bet that in-person interviews make a comeback in 2025, even for remote companies,' said Amit Matani, CEO of recruitment firm Wellfound, in a post on LinkedIn. Pindrop, according to CNBC, also used a video authentication program to confirm that 'Ivan' was a fake.

Some deepfake candidates who actually get hired do a bad job. Ironically, others do the job so well that employers are sorry to let them go when they're found out, according to Roger Grimes, a computer security consultant who spoke to CNBC. The rise of artificial intelligence has led to numerous concerns about workers being replaced by AI. Now it seems candidates may even have to compete with the technology on the job market.


NBC News
08-04-2025
- Business
- NBC News
Fake job seekers are flooding U.S. companies that are hiring for remote positions, tech CEOs say
When voice authentication startup Pindrop Security posted a recent job opening, one candidate stood out from hundreds of others. The applicant, a Russian coder named Ivan, seemed to have all the right qualifications for the senior engineering role. When he was interviewed over video last month, however, Pindrop's recruiter noticed that Ivan's facial expressions were slightly out of sync with his words.

That's because the candidate, whom the firm has since dubbed 'Ivan X,' was a scammer using deepfake software and other generative AI tools in a bid to get hired by the tech company, said Pindrop CEO and co-founder Vijay Balasubramaniyan. 'Gen AI has blurred the line between what it is to be human and what it means to be machine,' Balasubramaniyan said. 'What we're seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.'

Companies have long fought off attacks from hackers hoping to exploit vulnerabilities in their software, employees or vendors. Now, another threat has emerged: job candidates who aren't who they say they are, wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews. The rise of AI-generated profiles means that by 2028, 1 in 4 job candidates globally will be fake, according to research and advisory firm Gartner.

The risk to a company from bringing on a fake job seeker can vary, depending on the person's intentions. Once hired, the impostor can install malware to demand ransom from a company, or steal its customer data, trade secrets or funds, according to Balasubramaniyan. In many cases, the deceitful employees are simply collecting a salary that they wouldn't otherwise be able to, he said.

'Massive' increase

Cybersecurity and cryptocurrency firms have seen a recent surge in fake job seekers, industry experts told CNBC. As the companies are often hiring for remote roles, they present valuable targets for bad actors, these people said. Ben Sesser, the CEO of BrightHire, said he first heard of the issue a year ago and that the number of fraudulent job candidates has 'ramped up massively' this year. His company helps more than 300 corporate clients in finance, tech and health care assess prospective employees in video interviews. 'Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,' Sesser said. 'It's become a weak point that folks are trying to expose.'

But the issue isn't confined to the tech industry. More than 300 U.S. firms inadvertently hired impostors with ties to North Korea for IT work, including a major national television network, a defense manufacturer, an automaker, and other Fortune 500 companies, the Justice Department alleged in May. The workers used stolen American identities to apply for remote jobs and deployed remote networks and other techniques to mask their true locations, the DOJ said. They ultimately sent millions of dollars in wages to North Korea to help fund the nation's weapons program, the Justice Department alleged. That case, involving a ring of alleged enablers including an American citizen, exposed a small part of what U.S. authorities have said is a sprawling overseas network of thousands of IT workers with North Korean ties. The DOJ has since filed more cases involving North Korean IT workers.
A growth industry

Fake job seekers aren't letting up, if the experience of Lili Infante, founder and chief executive of CAT Labs, is any indication. Her Florida-based startup sits at the intersection of cybersecurity and cryptocurrency, making it especially alluring to bad actors. 'Every time we list a job posting, we get 100 North Korean spies applying to it,' Infante said. 'When you look at their resumes, they look amazing; they use all the keywords for what we're looking for.' Infante said her firm leans on an identity-verification company to weed out fake candidates, part of an emerging sector that includes firms such as iDenfy, Jumio and Socure.

The fake employee industry has broadened beyond North Koreans in recent years to include criminal groups located in Russia, China, Malaysia and South Korea, according to Roger Grimes, a veteran computer security consultant. Ironically, some of these fraudulent workers would be considered top performers at most companies, he said. 'Sometimes they'll do the role poorly, and then sometimes they perform it so well that I've actually had a few people tell me they were sorry they had to let them go,' Grimes said. His employer, the cybersecurity firm KnowBe4, said in October that it inadvertently hired a North Korean software engineer. The worker used AI to alter a stock photo, combined with a valid but stolen U.S. identity, and got through background checks, including four video interviews, the firm said. He was only discovered after the company found suspicious activity coming from his account.

Fighting deepfakes

Despite the DOJ case and a few other publicized incidents, hiring managers at most companies are generally unaware of the risks of fake job candidates, according to BrightHire's Sesser. 'They're responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,' he said. 'Folks think they're not experiencing it, but I think it's probably more likely that they're just not realizing that it's going on.' As the quality of deepfake technology improves, the issue will be harder to avoid, Sesser said.

As for 'Ivan X,' Pindrop's Balasubramaniyan said the startup used a new video authentication program it created to confirm he was a deepfake fraud. While Ivan claimed to be located in western Ukraine, his IP address indicated he was actually thousands of miles to the east, possibly in a Russian military facility near the North Korean border, the company said. Pindrop, backed by Andreessen Horowitz and Citi Ventures, was founded more than a decade ago to detect fraud in voice interactions, but may soon pivot to video authentication. Clients include some of the biggest U.S. banks, insurers and health companies.