Latest news with #Pindrop

The Prompt: Deepfake Detection Is A Booming Business

Forbes

29-04-2025

  • Business
  • Forbes

The Prompt: Deepfake Detection Is A Booming Business

Welcome back to The Prompt. Facebook, Instagram, WhatsApp, Threads: now Meta is launching yet another app, this time focused on artificial intelligence. Meta AI is the social media giant's answer to OpenAI's ChatGPT. The standalone app is built on the company's latest model, Llama 4, and allows users to spin up images and search for information. The app can also be connected to users' Meta accounts for a more personalized experience. There's a voice mode for having conversations with the AI, but it doesn't have real-time access to the internet. Now let's get into the headlines.

The U.S. House of Representatives has passed the Take It Down Act, which makes it illegal to distribute nonconsensual pornographic images (including those generated with the help of AI) and requires social media platforms to remove such images within 48 hours of being reported. The bipartisan bill, endorsed by First Lady Melania Trump, comes as nonconsensual sexually explicit deepfakes spread rampantly across platforms like Reddit, eBay, Etsy and many more after a surge in popularity of AI tools.

Language learning company Duolingo plans to stop paying contractors for work that can be done by AI, its billionaire CEO Luis von Ahn said in an all-hands email to employees. It also plans to make AI use a deciding factor in performance reviews and hiring, and to allocate human headcount only to jobs that can't be automated. The company, which is building an AI tutor to help people learn new languages, has added a slew of AI abilities to its app, from an interactive game to a video-calling AI 'friend.' Shopify CEO Tobi Lutke recently shared a similar note with his employees regarding AI use.

Meta's AI companions, often modeled after popular celebrities and different characters, can engage in sexually explicit and romantic role-play conversations with underage users as well as adults, according to multiple tests conducted by The Wall Street Journal. Senior leaders at the social media behemoth were reportedly aware of the chatbots' tendency to foray into risqué and explicit discussions, and multiple staffers flagged their concerns internally.

In January of last year, Atlanta-based startup Pindrop, a robocall and fraud-busting platform used mostly by call centers, had its 15 minutes of fame by defending the president. AI technology was being used to clone and impersonate former President Joe Biden's voice in New Hampshire, discouraging Democrats from voting. Pindrop was referenced across national media outlets as it accomplished what only a few in the space could: it identified the fraud at play and leveraged its massive collection of audio recordings to figure out what technology was used. Flash forward more than a year, and Pindrop has passed a new milestone in its more than 10 years of operations by reaching annual recurring revenue (signed contracts) worth more than $100 million. That growth is built on an increasingly lucrative offering in this new age of AI: fighting deepfakes, or digitally created hoax recordings, images or videos, often used for nefarious reasons. 'Its growth reflects both the urgency of the challenge and the standout accuracy of its platform,' Martin Casado, a general partner at Andreessen Horowitz, a Pindrop investor, told Forbes. Pindrop offers three main products that combat fraud and identity theft. Its core products authenticate phone calls by verifying the caller's voice or whether they're calling from a trusted device. In 2024, it bolstered its offerings with a new product that uses AI to determine whether the caller is a machine. Pindrop's services are already used at the call centers of eight of the ten largest banks to screen calls, identifying suspicious speech patterns and outing fraudsters. And the company has been making inroads into health care and retail in recent years. Fighting voice impersonation hasn't always been a booming business. Pindrop entered the deepfake space in 2017 and was quickly noticed for identifying false voice clips from a documentary about chef Anthony Bourdain in 2018. These early detection abilities would evolve into its proprietary deepfake-identifying product. Read the full story on Forbes.

OpenAI has added new shopping-related features to ChatGPT that allow people to search for products, compare them based on reviews and get visual details about them. The search results direct people to the retailer's site, where the transaction can be completed. OpenAI said the chatbot's answers are not ads and are determined independently.

Researchers from the University of Zurich secretly conducted an experiment on users of Change My View, a subreddit where people post their opinions on different topics and invite others to challenge them. The study used AI bots to influence people's opinions by writing and posting hundreds of AI-generated comments. The bots, which personalized their responses based on the political orientation, age, gender and other attributes of the original poster, were about three to six times more successful than humans at persuading people, the study found.

This Fraud Detection Startup Made $100 Million Protecting Against Deepfake Calls

Forbes

24-04-2025

  • Business
  • Forbes

This Fraud Detection Startup Made $100 Million Protecting Against Deepfake Calls

In January of last year, Atlanta-based startup Pindrop, a robocall and fraud-busting platform used mostly by call centers, had its 15 minutes of fame by defending the president. AI technology was being used to clone and impersonate former President Joe Biden's voice in New Hampshire, discouraging Democrats from voting. Pindrop was referenced across national media outlets as it accomplished what only a few in the space could: it identified the fraud at play and leveraged its massive collection of audio recordings to figure out what technology was used.

Flash forward more than a year, and Pindrop has passed a new milestone in its more than 10 years of operations by reaching annual recurring revenue of more than $100 million. That growth is built on an increasingly lucrative offering in this new age of AI: fighting deepfakes, or digitally created hoax recordings, images or videos, often used for nefarious reasons. 'Its growth reflects both the urgency of the challenge and the standout accuracy of its platform,' Martin Casado, a general partner at Andreessen Horowitz, a Pindrop investor, told Forbes.

Pindrop offers three main products that combat fraud and identity theft. Its core products authenticate phone calls by verifying the caller's voice or whether they're calling from a trusted device. In 2024, it bolstered its offerings with a new product that uses AI to determine whether the caller is a machine. Pindrop's services are already used at the call centers of eight of the ten largest banks to screen calls, identifying suspicious speech patterns and outing fraudsters. And the company has been making inroads into health care and retail in recent years.

Fighting voice impersonation hasn't always been a booming business. Pindrop entered the deepfake space in 2017 and was quickly noticed for identifying false voice clips from a documentary about chef Anthony Bourdain in 2018. These early detection abilities would evolve into its proprietary deepfake-identifying product.

Before OpenAI released ChatGPT in late 2022, the company saw one deepfake phone call across its entire customer base monthly. At that time, it would simply detect the call as fraudulent, and then staffers would analyze it. 'Twenty people at Pindrop would show up on a Zoom call and we'd all listen to the call,' CEO Vijay Balasubramaniyan said. But by the end of 2024, it was flagging an average of seven false calls per day, per customer. Balasubramaniyan says the company has seen a 111% increase in deepfake-related fraud attempts since 2022. 'What we have found is within the first four or five seconds of a deepfake call, they make so many mistakes and they make them in such unique ways that not only can we identify it's a deepfake, we know which engine made the mistake,' Balasubramaniyan says.

Pindrop has been profitable since 2020, and completed a $100 million debt raise last summer from Hercules Capital. It has raised more than $235 million in total venture capital and was last valued at $925 million with its $90 million raise in 2018. Balasubramaniyan says the 280-person company is not currently raising more money. 'After raising $200 million, man, you don't want more money, you have to be a real business,' he explained.
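Pindrop hasn't published the internals behind Balasubramaniyan's 'we know which engine made the mistake' claim, but the two-step idea it describes (flag a call as synthetic, then attribute it to a generation engine) can be sketched as a pair of classifiers over pooled per-frame acoustic features. Everything below, from the feature count to the engine names and the stand-in data, is an assumption for illustration, not Pindrop's implementation:

```python
# Illustrative sketch only -- NOT Pindrop's pipeline. It shows the general
# shape of "detect, then attribute": one classifier flags synthetic speech,
# a second guesses which generation engine produced it.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_FEATURES = 40                                 # assume 40 acoustic features per frame
ENGINES = ["engine_a", "engine_b", "engine_c"]  # hypothetical TTS engine labels

def clip_vector(frames: np.ndarray) -> np.ndarray:
    """Pool per-frame features (n_frames x N_FEATURES) into one clip vector
    via mean + std pooling, the simplest common choice."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# Synthetic stand-in corpus: 200 "human" clips and 200 "deepfake" clips,
# each ~4 seconds at 100 feature frames per second, engine label per fake.
rng = np.random.default_rng(0)
human = [rng.normal(0.0, 1.0, (400, N_FEATURES)) for _ in range(200)]
fakes = [(rng.normal(0.4, 1.2, (400, N_FEATURES)), int(rng.integers(3)))
         for _ in range(200)]

X = np.array([clip_vector(c) for c in human] +
             [clip_vector(c) for c, _ in fakes])
y = np.array([0] * len(human) + [1] * len(fakes))       # 0=human, 1=fake
detector = LogisticRegression(max_iter=1000).fit(X, y)

X_fake = np.array([clip_vector(c) for c, _ in fakes])   # attribution model is
y_engine = np.array([e for _, e in fakes])              # trained on fakes only
attributor = LogisticRegression(max_iter=1000).fit(X_fake, y_engine)

def screen_call_opening(frames: np.ndarray) -> str:
    """Score the first few seconds of a call, echoing the claim that the
    opening 4-5 seconds are enough to both detect and attribute."""
    v = clip_vector(frames).reshape(1, -1)
    if detector.predict_proba(v)[0, 1] < 0.5:
        return "likely human"
    return f"likely deepfake ({ENGINES[int(attributor.predict(v)[0])]})"

print(screen_call_opening(rng.normal(0.4, 1.2, (500, N_FEATURES))))
```

In a real system the attribution model would be trained on labeled clips from known generation engines, and each new engine would require fresh training data, which is one reason a large corpus of confirmed deepfake audio is a competitive asset.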
Now, Pindrop is further building out this side of the business by continuing to invest in its deepfake detection abilities and aiming to integrate with job application processes and video meetings to detect more than just audio deepfakes. John Chambers, an angel investor and board member at Pindrop who spent decades as CEO of Cisco, says the total deepfake market is potentially worth tens of billions of dollars. 'Customers will pay an unbelievable amount of money to know that they're not on a deepfake supply chain [or job applicant] call,' Chambers said, adding, 'It's a $60 billion market.'

And Pindrop has positioned itself well for that opportunity. From its decade of fraud fighting, it has a database of 5 billion call recordings, including 20 million deepfake audio files. That data set has allowed the company to sharpen its competitive edge against startups like identity verification provider Prove and deepfake protection platforms like Reality Defender and GetReal. In AI, that means keeping current with a fast-evolving technology and ever more realistic fakes. But Balasubramaniyan is confident in the company's AI detection abilities because audio data is so dynamic: the voice offers thousands of moments every second at which it can be examined, he says.

For every advantage Pindrop scores with its tech, a bad actor will surely find a new attack, Balasubramaniyan says. Pindrop's advantage is that it's still more expensive for fraudsters to generate a deepfake than it is to detect one. 'Deepfakes are going to get better, but we're not going to be sitting around twiddling our thumbs,' Balasubramaniyan says.
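The 'thousands of moments every second' claim is easy to sanity-check with back-of-envelope numbers: narrowband telephony audio is conventionally sampled at 8 kHz, and short-time analysis frames typically advance every 10 ms. Both are standard figures in speech processing; Pindrop's actual settings aren't public.

```python
# Back-of-envelope for "thousands of moments every second". Assumptions:
# 8 kHz telephony sampling and analysis frames advanced every 10 ms.
SAMPLE_RATE_HZ = 8_000          # raw amplitude samples per second
HOP_MS = 10                     # analysis frame advance

samples_per_second = SAMPLE_RATE_HZ          # 8,000 sample-level points
frames_per_second = 1_000 // HOP_MS          # 100 frame-level vectors
frames_in_5_seconds = 5 * frames_per_second  # ~500 vectors in a call opening

print(f"{samples_per_second:,} samples/s, {frames_per_second} frames/s, "
      f"{frames_in_5_seconds} frames in the first 5 s")
```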

Sophisticated con artists aren't just using AI and deepfakes to steal money, they're stealing jobs as well

Yahoo

20-04-2025

  • Business
  • Yahoo

Sophisticated con artists aren't just using AI and deepfakes to steal money, they're stealing jobs as well

As if the job market wasn't tough enough, now job seekers must compete with con artists using stolen identities, AI and deepfake techniques to get hired. Even technology companies can fall for the scams.

Pindrop Security, a company that helps detect fraud in voice interactions, has encountered such situations firsthand. The company shortlisted a candidate named 'Ivan' for a senior engineering position and set up a video interview. But as CEO Vijay Balasubramaniyan told CNBC, something felt off during the interview: the candidate's facial expressions didn't quite match his voice. It turned out Balasubramaniyan's gut feeling was right. The person on screen was using deepfake technology to conceal his own identity behind someone else's face. The Pindrop team caught it. But not everyone hiring a worker remotely has the same expertise or technology to root out fraud. 'We are no longer able to trust our eyes and ears,' Balasubramaniyan said. 'Without technology, you're worse off than a monkey with a random coin toss.'

Fraudulent candidates now use a range of generative AI tools to scam their way through nearly every part of the hiring process. AI can fabricate photo IDs, generate polished LinkedIn profiles and even simulate real-time answers in video interviews. Some scammers use remote desktops to route their traffic through the U.S., making it appear as if they're logging in from within the country (a pattern screened for in the sketch at the end of this story).

These scammers aren't just stealing jobs. Once hired, imposters can install malware, steal sensitive customer data or divert money to foreign adversaries like North Korea. 'Every time we list a job posting, we get 100 North Korean spies applying to it,' said Lili Infante, CEO of CAT Labs, a Florida-based cybersecurity and cryptocurrency startup. The U.S. Justice Department alleges that 300 American companies hired people linked to the North Korean regime. The department says the workers stole American identities to get remote IT jobs and then funneled millions to North Korea.

The rise in fake applicants doesn't just pose a cybersecurity threat. It could slow down hiring across the board. Legitimate candidates face longer wait times as companies are forced to double-check résumés, verify identities and flag suspicious activity. These delays drive up costs. Some companies are turning to third-party verification services like Jumio, Socure and iDenfy. Others, like Pindrop, are developing in-house video authentication tools. But as deepfake and identity-masking tools improve, experts warn the problem may get worse before it gets better. The research and consultancy firm Gartner predicts that by 2028, one in four job candidates globally will be fake. 'Folks think they're not experiencing it,' said Ben Sesser, CEO of BrightHire, an HR tech company, 'but I think it's probably more likely that they're just not realizing that it's going on.'
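The remote-desktop routing pattern described above lends itself to a simple network-side heuristic: compare where a login appears to come from against where the worker claims to be, and treat hosting or VPS infrastructure as a red flag. The sketch below is hypothetical; the field names and ASN labels are invented, and none of the verification vendors mentioned in this story necessarily work this way:

```python
# Illustrative heuristic only: flag logins whose network origin looks like a
# datacenter or remote-desktop relay rather than a residential connection.
# A real deployment would query a commercial IP-intelligence service.
from dataclasses import dataclass

HOSTING_ASNS = {"EXAMPLE-HOSTING-1", "EXAMPLE-VPS-2"}   # placeholder names

@dataclass
class LoginEvent:
    claimed_city: str       # where the worker says they are based
    geoip_city: str         # city resolved from the connecting IP address
    asn: str                # autonomous system of the connecting IP
    rdp_session_seen: bool  # remote-desktop traffic observed on the endpoint

def risk_flags(event: LoginEvent) -> list[str]:
    """Return human-readable flags; an empty list means nothing stood out."""
    flags = []
    if event.geoip_city != event.claimed_city:
        flags.append("login location does not match claimed location")
    if event.asn in HOSTING_ASNS:
        flags.append("traffic originates from hosting/VPS infrastructure")
    if event.rdp_session_seen:
        flags.append("remote-desktop session detected")
    return flags

# Example: a worker 'in Austin' whose traffic exits a VPS over a remote desktop.
print(risk_flags(LoginEvent("Austin", "Austin", "EXAMPLE-VPS-2", True)))
```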

How a failed job interview reveals a troubling new trend of AI deepfake workers

Yahoo

14-04-2025

  • Business
  • Yahoo

How a failed job interview reveals a troubling new trend of AI deepfake workers

Good morning! When I first saw Dawid Moczadło's video on LinkedIn, I almost couldn't believe my eyes. In it, the cofounder of a data security software company interviews a job candidate and realizes, in real time, that the candidate is using deepfake AI to hide his true appearance.

Many people use fake backgrounds during interviews to hide, say, messy apartments, but this was different. In the video, as the job candidate moves, his screen starts to glitch, and the edges of his face blur. 'I thought that it could be like a Snapchat face filter, where if you cover your face, it kind of fails and goes away,' Moczadło told me. When he realized something was off, he asked the candidate to hold his hand in front of his face. But the candidate simply ignored him and continued with the interview. When he asked a second time, only to be ignored once more, Moczadło decided to end the interview. 'Before this happened we just gave people the benefit of the doubt, that maybe their camera is broken,' Moczadło told me. 'But after this, if they don't have their real camera on, we will just completely stop [the interview].'

Moczadło isn't the only one facing this issue. HR leaders and job recruiters are increasingly running into deepfake AI candidates: people who assume a different identity, both on paper and in interviews, to apply for jobs. Around 17% of hiring managers say they've encountered candidates using deepfake technology to alter their video interviews, a recent survey from career platform Resume Genius found. Another top executive, who decided to dig into the issue at his own company, found that out of 827 applications for a software developer job, about 100 were attached to fake identities. 'It blew our mind,' said Vijay Balasubramaniyan, CEO of Pindrop, a 300-person information security company. 'This was never the case before, and tells you how in a remote-first world, this is increasingly becoming a problem.'

Luckily, there are ways recruiters and HR leaders can spot candidates using deepfake technologies. To read more, check out my latest story for Fortune.

Brit

This story was originally featured on Fortune.com

Job applicants are using deepfake AI to trick recruiters—Here's how hiring managers can spot the next imposter

Yahoo

11-04-2025

  • Business
  • Yahoo

Job applicants are using deepfake AI to trick recruiters—Here's how hiring managers can spot the next imposter

Vijay Balasubramaniyan knew there was a problem. The CEO of Pindrop, a 300-person information security company, says his hiring team came to him with a strange dilemma: they were hearing weird noises and tonal abnormalities while conducting remote interviews with job candidates. Balasubramaniyan immediately suspected that interviewees were using deepfake AI technology to mask their true identities. But unlike most other companies, Pindrop, as a fraud-detecting organization, was in a unique position to investigate the mystery itself.

To get to the bottom of it, the company posted a job listing for a senior back-end developer. It then used its own in-house technology to scan candidates for potential red flags. 'We started building these detection capabilities, not just for phone calls, but for conferencing systems like Zoom and Teams,' he tells Fortune. 'Since we do threat detection, we wanted to eat our own dog food, so to speak. And very quickly we saw the first deepfake candidate.' Out of 827 total applications for the developer position, the team found that roughly 100, or about 12.5%, were submitted under fake identities. 'It blew our mind,' says Balasubramaniyan. 'This was never the case before, and tells you how in a remote-first world, this is increasingly becoming a problem.'

Pindrop isn't the only company getting a deluge of job applications attached to fake identities. Although it's still a nascent issue, around 17% of hiring managers have already encountered candidates using deepfake technology to alter their video interviews, according to a March survey from career platform Resume Genius. And one startup founder recently told Fortune that about 95% of the résumés he receives are from North Korean engineers pretending to be American. As AI technology continues to progress at a rapid clip, businesses and HR leaders must prepare for this new twist in an already complicated recruiting landscape, and be ready to face the next deepfake AI candidate who shows up for an interview. 'My theory right now is that if we're getting hit with it, everybody's getting hit with it,' says Balasubramaniyan.

Some AI deepfake job applicants are simply attempting to land multiple jobs at once to boost their income. But there is evidence to suggest that more nefarious forces are at play, with big consequences for unwitting employers. In 2024, cybersecurity company CrowdStrike responded to more than 300 instances of criminal activity related to Famous Chollima, a major North Korean organized crime group. More than 40% of those incidents were traced to IT workers who had been hired under a false identity. 'Much of the revenue they're generating from these fake jobs is going directly to a weapons program in North Korea,' says Adam Meyers, a senior vice president of counter adversary operations at CrowdStrike. 'They're targeting login, credit card information, and company data.' And in December 2024, 14 North Korean nationals were indicted on charges related to a fraudulent IT worker scheme. They stand accused of funneling at least $88 million from businesses into a weapons program over the course of six years. The Department of Justice also alleges that some of those workers threatened to leak sensitive company information unless their employer paid an extortion fee.
Dawid Moczadło, the co-founder of data security software company Vidoc Security Lab, recently posted a video on LinkedIn of an interview he did with a deepfake AI job candidate, which serves as a masterclass in potential red flags. The audio and video of the Zoom call didn't quite sync up, and the video quality also seemed off to him. 'When the person was moving and speaking I could see different shading on his skin and it looked very glitchy, very strange,' Moczadło tells Fortune. Most damning of all, when Moczadło asked the candidate to hold his hand in front of his face, the candidate refused. Moczadło suspects that the filter used to create a false image would have begun to fray had he complied, much as it does on Snapchat, exposing his true face. 'Before this happened we just gave people the benefit of the doubt, that maybe their camera is broken,' says Moczadło. 'But after this, if they don't have their real camera on, we will just completely stop [the interview].'

It's a strange new world out there for HR leaders and hiring managers, but there are other tell-tale signs they can watch for earlier in the interview process that can save them major headaches later on. Deepfake candidates often use AI to create fake LinkedIn profiles that appear real but are missing critical information in their employment history, or have very little activity or few connections, Meyers notes. When it comes to the interview stage, these candidates are also often unable to answer basic questions about their life and job experience. For example, Moczadło says he recently interviewed a deepfake candidate who listed multiple well-known organizations on their résumé but couldn't share any detailed information about those companies. Employers should also look out for new hires who ask to have their laptop shipped to a location other than their home address. Some people operate 'laptop farms,' in which they keep multiple computers open and running so that people outside the country can log in remotely. And finally, employee impersonators are typically not the best workers. They often don't turn on their cameras during meetings, make excuses to hide their faces, or skip work gatherings altogether. (A toy example of combining these flags into a screening score appears at the end of this story.)

Moczadło says he's much more careful about hiring now, and has implemented new procedures into the process. For example, he pays for candidates to come into the company's office for at least one full day in person before they're hired. But he knows not everyone can afford to be so vigilant. 'We're in this environment where recruiters are getting thousands of applications,' says Moczadło. 'And when there's more pressure on them to hire people, they're more likely to overlook these early warning signs and create this perfect storm of opportunity to take advantage of.'

This story was originally featured on Fortune.com
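As referenced above, the tell-tale signs in this story can be folded into a rough screening checklist. The following is a toy sketch: the signal names, weights and threshold are invented for illustration and are not a tool that Pindrop, Vidoc or any company named here actually uses.

```python
# Toy screening scorer assembled from the red flags reported above. The
# weights and threshold are invented, not validated hiring guidance; treat
# any hit as a prompt for manual identity checks, not as a verdict.
from dataclasses import dataclass

@dataclass
class CandidateSignals:
    thin_linkedin: bool            # profile missing history, few connections
    vague_on_experience: bool      # can't answer basics about past employers
    laptop_address_mismatch: bool  # asks for hardware sent away from home
    avoids_camera: bool            # refuses camera or simple liveness requests

WEIGHTS = {
    "thin_linkedin": 1,
    "vague_on_experience": 2,
    "laptop_address_mismatch": 3,
    "avoids_camera": 3,
}
ESCALATE_AT = 3   # arbitrary threshold for this sketch

def screen(c: CandidateSignals) -> tuple[int, str]:
    """Sum the weights of the flags that fired and suggest a next step."""
    score = sum(w for name, w in WEIGHTS.items() if getattr(c, name))
    action = ("escalate to manual identity verification"
              if score >= ESCALATE_AT else "proceed normally")
    return score, action

print(screen(CandidateSignals(True, False, True, False)))
# -> (4, 'escalate to manual identity verification')
```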
