
Fraudulent Candidates, Fake Interviews And AI: The New Hiring Crisis
Bad actors are using AI everywhere—from classrooms to corporate offices. They're stealing grades, stealing jobs, and, yes, stealing money.
Take Christina Marie Chapman, a 48-year-old woman from Litchfield Park, Arizona, for example. She recently pleaded guilty in U.S. District Court in Washington, D.C., in connection with a fraud scheme that helped overseas IT workers pose as U.S. citizens and land remote tech jobs at more than 300 American companies. The kicker? The operation funneled more than $17 million to North Korea.
So no, candidate fraud isn't just about white lies on a resume anymore. It's organized, weaponized and spreading fast.
But let's bring it closer to home: Most businesses aren't lying awake at night worried about North Korean hackers infiltrating their workforce. What they are worried about is keeping up with the 'small stuff'—the kind of low-grade but high-impact fraud happening every day in hiring funnels across every industry.
It's like what professors are facing right now: trying to outpace students using ChatGPT to ace their essays. Hiring managers are dealing with the same dilemma. We used to trust a resume. Now we can't even trust that.
AI is transforming hiring—and not always in the ways we hoped. While it's helping teams make faster, smarter decisions, it's also becoming a go-to tool for dishonest candidates. From AI-generated resumes and cover letters to deepfake interviews, impersonations and fake references, we've entered a full-on Wild West moment in talent acquisition.
Industry experts are reporting a spike in fake job seekers—especially in cybersecurity and crypto, where remote work and technical screening create ideal conditions for fraud to thrive.
This isn't just a college kid cheating on an online exam. This is systemic candidate fraud—and it's costing companies real money.
Here's what leaders need to understand: Fraudulent candidates aren't just an ethical concern; they're a business risk. According to a recent analysis my company, Crosschq, conducted, which reviewed more than 200,000 hiring signals, 72% of candidates admit to lying on their resumes, 38% have lied in interviews, and 45% say that's totally acceptable behavior.
It's not just dishonest—it's damaging. Candidates flagged for fraudulent behavior had a 27% lower quality of hire than their honest peers, according to the same study. That hits productivity, morale and retention. And by the time you figure it out? You're already backfilling.
And the financial cost? The U.S. Department of Labor estimates that a bad hire can cost up to 30% of that employee's first-year salary; on a $100,000 role, that's up to $30,000 before you even count the impact on team cohesion, missed goals and customer experience.
So what can hiring teams do? Here's what forward-thinking companies are already putting in place:
Some companies I've spoken with in the industry are standing up facilities dedicated solely to in-person interviews. It's the hiring equivalent of college entrance exams going back to proctored settings: you want to be sure the person who shows up on day one is the same person you interviewed.
Resume screening tools, especially those that cross-check claims against multiple data sources, can surface red flags early. It's not about catching cheaters with a spotlight. It's about shining a light on the truth before it's too late.
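As a rough illustration of what cross-source screening can look like, here is a minimal Python sketch that compares a candidate's claimed work history against an independently sourced one. Everything in it is an assumption for the example (the Stint record, the 90-day date tolerance, one stint per employer); real screening products work from far richer signals.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Stint:
    """One job in a work history (claimed or independently verified)."""
    employer: str
    title: str
    start: date
    end: date

def find_inconsistencies(resume: list[Stint], verified: list[Stint]) -> list[str]:
    """Compare resume claims against an independently sourced history and
    return plain-language red flags for a recruiter to review."""
    flags: list[str] = []
    # Index the verified history (one stint per employer, for simplicity).
    on_record = {s.employer.lower(): s for s in verified}
    for claim in resume:
        record = on_record.get(claim.employer.lower())
        if record is None:
            flags.append(f"No independent record of employment at {claim.employer}.")
            continue
        if claim.title.lower() != record.title.lower():
            flags.append(
                f"Title mismatch at {claim.employer}: claimed "
                f"'{claim.title}', verified '{record.title}'."
            )
        # Month-level date fuzziness is normal; drift beyond ~3 months is not.
        if (abs((claim.start - record.start).days) > 90
                or abs((claim.end - record.end).days) > 90):
            flags.append(
                f"Employment dates at {claim.employer} differ from the "
                f"verified record by more than ~3 months."
            )
    return flags
```

A real pipeline would pull the verified side from references, background checks or employment databases; the point is simply that contradictions between sources are machine-detectable long before an offer goes out.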
Use identity verification tools to confirm that the person applying for the job is who they say they are. Especially for remote roles or technical positions, this step is quickly becoming a best practice.
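Document verification itself is best left to dedicated providers, but even the final step, confirming that the verified document belongs to the applicant on file, can be automated. Here's a minimal sketch using only Python's standard library; the token-sorting trick and the 0.8 threshold are assumptions for illustration, not a production matcher.

```python
from difflib import SequenceMatcher

def names_plausibly_match(id_name: str, applicant_name: str,
                          threshold: float = 0.8) -> bool:
    """Fuzzy-compare the name on a verified identity document with the
    name on the application. Sorting the tokens tolerates reordered or
    dropped middle names; the similarity ratio tolerates typos."""
    a = " ".join(sorted(id_name.lower().split()))
    b = " ".join(sorted(applicant_name.lower().split()))
    return SequenceMatcher(None, a, b).ratio() >= threshold

# names_plausibly_match("Christina Marie Chapman", "christina chapman") -> True
# names_plausibly_match("Christina Chapman", "John Smith")              -> False
```

A score below the threshold shouldn't auto-reject anyone; it should route the application to a human for a closer look.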
Structured assessments and job simulations can provide strong signals about whether a candidate has the real skills for the role. Many companies now include browser tracking, IP logging or time-on-task analytics to catch signs of suspicious activity.
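The telemetry checks can be simple. Below is a hedged sketch of the kind of time-on-task and IP heuristics described above; the event schema, field names and thresholds are all invented for the example and would need calibrating against real assessment data.

```python
from collections import Counter

def assessment_red_flags(events: list[dict]) -> list[str]:
    """Scan session telemetry from a timed assessment for suspicious patterns.

    Each event is assumed to look like:
        {"question_id": "q3", "seconds_on_task": 4.2,
         "ip": "203.0.113.7", "pasted_chars": 350}
    Both the schema and the thresholds are illustrative, not calibrated.
    """
    flags: list[str] = []

    # One session hopping across IP addresses can mean the work is being
    # relayed to someone else mid-assessment.
    ips = Counter(event["ip"] for event in events)
    if len(ips) > 1:
        flags.append(f"Session used {len(ips)} distinct IP addresses: {sorted(ips)}")

    for event in events:
        # A large paste after only seconds on task suggests the answer
        # arrived from elsewhere rather than being worked out live.
        if event["seconds_on_task"] < 10 and event.get("pasted_chars", 0) > 200:
            flags.append(
                f"{event['question_id']}: ~{event['seconds_on_task']:.0f}s on "
                f"task but {event['pasted_chars']} characters pasted"
            )
    return flags
```

In practice you'd tune these cutoffs per question difficulty and treat each flag as a prompt for follow-up questions, not as an automatic rejection.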
Ask deeper questions. Don't just ask what someone did; ask how they did it. The more detail required, the harder it is to fake. And if you are doing remote video interviews, make sure you use a video platform with deepfake- and fraud-detection technologies.
This problem isn't going away. In fact, as AI evolves faster than our policies, ethics and hiring infrastructure can keep up, I believe candidate fraud is likely to get worse before it gets better. But with a little more rigor—and a lot more transparency—we can start to build hiring systems that reward authenticity over algorithmic trickery.
Because at the end of the day, it's not just about finding people who can do the job. It's about finding people you can trust to be the job.