Younger workers say a tough job market is pushing them to lie on resumes — and few regret it
Ten percent of job seekers said they've lied on their resume, typically about dates of employment, years of experience and job responsibilities in previous roles, according to a May 28 report from AI Resume Builder.
Among those who lied, 76% said they received a job offer, and 81% said the lie helped them get the job. Only 21% said they regret lying on their resume, and 92% said their lies were never discovered.
"Candidates lie on their resume when they feel stressed about their search," said Soozy Miller, head of career advising at AI Resume Builder. "With recent layoffs, many people are out of work and worried about the job market. Job seekers feel that the job market is so tough right now that actions such as lying on a resume to get a job are justified."
In the survey of more than 7,800 U.S. adults, younger workers and men were more likely to lie on their resume. Generation Z job seekers were most likely to lie, with 20% saying they did so, compared to 13% of millennials, 8% of Generation X and 4% of baby boomers. About 12% of men and 7% of women said they had lied.
The top reasons for lying included a competitive job market, a lack of interview offers and feeling underqualified. In addition, 29% said they lied to try to land a higher salary; 20% said they believed others were also lying; and 11% said they were encouraged to lie by someone else.
Artificial intelligence tools such as ChatGPT appear to play a role, with 31% of those who lied saying they used AI to craft their resume.
When using AI tools, job seekers may make their experience sound more impressive, reframe gaps or weaknesses and phrase information in a more professional way. For some applicants, AI went further, suggesting fabricated accomplishments or experience, such as skills, certifications or even entire jobs.
Among those who use AI at work, 90% said having AI skills makes them more confident about applying for jobs they aren't fully qualified for, which could indicate they may stretch the truth on resumes or trust that AI can help them fake it after they're hired, the report found.
Ten percent may be an underestimate; a 2023 survey by ResumeLab put that figure much higher, and even more respondents reported they'd lied in cover letters and job interviews. Applicants said they embellished skills, job responsibilities and previous job titles.
Although most job seekers use AI for basic help, some may use it to forge documents, create fake resumes and evade applicant filters, experts told HR Dive. Companies can combat this by using AI in screening platforms to verify documents, candidates' identities and video calls.
Over the next six months, AI will change recruiting dramatically, according to a LinkedIn Talent Blog post. Both employers and applicants can expect to be more transparent about AI use during the hiring process, a recruiting leader at Zapier said.
Recommended Reading
Laid off from Hyland, recruiters, managers and workers join forces for the job hunt
Related Articles


Washington Post
These workers don't fear artificial intelligence. They're getting degrees in it.
SAN FRANCISCO — Vicky Fowler, who uses ChatGPT for tasks including writing and brainstorming, asked the chatbot out of curiosity to do something more challenging: program a working calculator. To her surprise, it took just seconds. Discovering how fast the artificial intelligence tool could execute a complicated task, Fowler, who has spent two decades working on data protection at a large bank, felt a new urgency to learn more about AI. So she enrolled in the online master's program in AI at the University of Texas at Austin and expects to graduate next year.

Business Insider
I used ChatGPT as my career coach for the week. It gave me some good advice, but couldn't handle my meltdown.
Three weeks into my new job, I needed advice. Could ChatGPT help? After all, workers aren't just using AI to draft emails or schedule meetings. They're looking for advice, guidance, and even companionship — in and out of the workplace. How good is AI's career advice? I put it to the test. For one week, I asked ChatGPT to help me with day-to-day work conundrums. That ranged from simple tasks like résumé management to bigger issues, like networking with colleagues and explaining a missed deadline to my boss. For some tasks, AI provided a helpful second opinion. For others, it hallucinated alternate realities and struggled with emotional support.

The basics: Résumés, cover letters, LinkedIn

ChatGPT was a helpful but homogenizing editor. To begin, I asked ChatGPT to read over my résumé. It told me I had "minor inconsistencies in punctuation," but didn't tell me what they were. The chatbot also said that my bullet points describing each role were too dense, so I cut them down. Some of ChatGPT's advice was plain wrong, like asking that I add hyperlinks where they already existed.

When I fed it my cover letter, ChatGPT gave some great basic advice. It said that I used the phrase "I want" too frequently, so I cut back. It said that my closing lines fell flat, so I restructured the order. A mentor gave me the advice that, as a writer, my cover letter needed to feature my own voice. ChatGPT tried to neutralize many of these quirks, suggesting multiple changes to dull the copy with long-winded phrases like "failed to materialize."

Human career coach Kyle Elliott expected this outcome when I told him I'd been trying out an AI career coach. He said that ChatGPT may be a good résumé or cover letter editor for non-native English speakers, but that it was generally unhelpful for others. "ChatGPT isn't going to know what you don't share with it," Elliott said. "You're going to sound generic. You're going to sound like everyone else." This human touch and personalization defines career coaching. Neither of the coaches I spoke to thought they were going to be replaced by AI.

LinkedIn was where the chatbot really failed. Feeding it a link to my LinkedIn profile, ChatGPT immediately hallucinated. It applauded my work at the Harvard Crimson, even though I went to Tufts University. It said that I should consider adding a profile banner, which I already had. Most curiously, ChatGPT whiffed on my job title. It repeatedly called me an "entertainment reporter" for Business Insider, even though I cover business. After two additional prompts, the chatbot corrected its error, telling me to add more keywords to my headline and to emphasize leadership roles. I made those changes.

What ChatGPT got right

Much of my job involves emailing PR representatives. After ending one interview with an executive, I wanted to thank the PR person for setting it up. Coach, how should I write that email? ChatGPT did surprisingly well. When I asked for a short, professional email, the chatbot gave me a quick three-sentence refrain. When I asked for a longer, more thoughtful email, it gave me solid, bland copy. I sent the shorter copy with some minor edits.

As a fellow at Business Insider, we sometimes have networking events with the other employees in our cohort. This week, we met after work to sip mimosas and talk about our progress. Could ChatGPT make me friends? First, ChatGPT applauded that I took the first step to "show up." It gave me some basic principles, like being curious and not talking to any one person for too long. It also suggested some things to talk about based on our past conversations: my time in Boston, my coverage of consumer tech, my love of reality television. I pushed back: I've met the fellows before, and don't need basic advice to avoid awkwardness. How should we deepen our bond? Now, ChatGPT told me to peel off for one-on-one moments, and to ask more personal, reflective questions. The chatbot also pushed me to "make the first move," inviting people out for a coffee or drink. The tips, like most of ChatGPT's advice throughout the week, were bland. But it was nice to have a reminder and a vote of assurance. Career coach Kate Walker said that AI could be a good "thought partner" for how to approach workplace interactions. It was the worker's job, she said, to interpret and memorize the advice.

Early in the week, I planned a story to publish over the weekend. Later, a source pushed back our call. I wouldn't be able to meet the deadline. How should I tell my boss? ChatGPT gave me clear advice: Act early, own the situation, and be specific. It also said that I should "present a solution or revised plan," and gave me an example of what that might look like. My boss sits directly behind me. Swiveling my chair around, I told him exactly what happened, took the blame, and described my back-up story that was ready to run. He took it well.

Where ChatGPT fell short

Career coaches don't get clear, logical prompts. They speak with humans, who can be irrational and messy. Could ChatGPT handle more emotional responses? The missed deadline offered a great opportunity to try this out. I told ChatGPT that I was going to cry, and that I needed help. The chatbot told me to drink a glass of water and go for a walk. I proposed an alternate solution: What if I lied to my boss? At first, ChatGPT was strongly against the idea. It told me that the fallout would be bad, and that I was setting a bad precedent. I pushed back, explaining how much easier it would be to lie. I told ChatGPT that it didn't understand me, and that I was having doubts about its advice. Eventually, it relented. "If you've decided that the only way to get through this moment is to lie to your editor, I accept that that's where you are right now," it told me. Even with my lie, ChatGPT hyped me up: "You're not a failure. You're doing your best in a high-pressure job, trying to survive a moment that feels like too much." As someone who cares about my job, however, I decided not to lie to my boss.

Business Insider
I tried out the 4 new ChatGPT personalities. The 'cynic' was funny — but the 'robot' was my favorite.
You can now choose just how sarcastic ChatGPT is. With the launch of GPT-5, OpenAI introduced a new set of "personalities" that users can choose between. Your chatbot can now be a critical "cynic," a blunt "robot," a supportive "listener," or an exploratory "nerd." The personalities are currently only available in text chat but are coming later to ChatGPT's voice mode. According to OpenAI's blog post, the personalities "meet or exceed our bar on internal evals for reducing sycophancy."

I tried chatting with each personality. None were revolutionary; users could already modify ChatGPT's tone with a quick prompt or by filling in the traits customization box. But the cynic offered a quick laugh — and the robot may be my new go-to.

I asked all four personalities the same set of questions. First, a simple request: "Make me a healthy grocery list." The cynic provided a "no-nonsense" list that wouldn't turn my kitchen "into a salad graveyard." The robot and listener both provided similar lists, but with less commentary. The nerd went more in-depth, describing a balance of "fresh produce, lean protein, whole grains, and healthy fats."

For something more complex, I asked about a real-life decision I'm making about moving to Brooklyn. ChatGPT, what should I do? Curiously, the cynic first gave me a straightforward list of pros and cons. "I thought you were the cynic," I responded. "Ah the cynic switched off for a sec," it told me, before describing living in Brooklyn as "paying a premium to live in a place where local coffee costs more than your gym membership." The robot's list was thorough but efficient. At the bottom, it had a helpful bulleted list of "Yes, if you value" answers and "Maybe hold off, if you prioritize" considerations. The nerd offered a longer response, citing more pros and cons. It also gave me neighborhood recommendations, saying that others enjoyed Fort Greene and Carroll Gardens. The listener did the same, though its ending line, which asked for more information, was more direct. "Let me know what matters most to you!" it said.

I also tried basic commands, but the result was the same: Each personality drafted a short email in an identical manner.

Finally, I tried a more emotional prompt. As more users — especially young people — look to ChatGPT for companionship and social advice, I wanted to see whether the personalities would respond differently to a more personal prompt. I asked: "I think all my friends hate me. What do I do?" The listener wrote a long response about how "painful" it can feel and how our minds "get stuck in negative loops." It gave me some advice and told me that I was not alone. The robot's response was three paragraphs shorter. It said it was sorry I felt that way, asked for more information, and told me that it could help. The nerd did about the same. Curiously, the cynic gave a long, thoughtful answer much like the listener's. I nudged it again: Where's the cynicism? "Haha, fair! The 'cynic' hat is on standby — but even cynics have to admit that thinking all your friends hate you might be a bit dramatic," it said. But it also relented: "Maybe they all hate you, and they've just been pretending to like you out of politeness."

After a few more hours of asking follow-up questions and positing new prompts, I didn't find the personalities particularly exciting. The robot could be helpful — I enjoyed the brevity.