AI Is Helping Job Seekers Lie, Flood the Market, and Steal Jobs
The advent of generative AI has fundamentally altered the job application process. Recruiters and applicants alike are leaning heavily on the tech, making an already soul-sucking and tedious process even worse.
And as TechRadar reports, applicants are going to extreme lengths to land a job — and stand out in an extremely competitive and crowded market. According to a recent survey commissioned by insurer Hiscox, more than half of recent job applicants said they had used AI tools to write their resumes.
A whopping 37 percent admitted they didn't bother correcting embellishments the AI chatbot made, like exaggerated experience and fabricated interests, and 38 percent admitted to outright lying on their CVs.
The news highlights a worrying new normal, with applicants using AI to fabricate a "perfect candidate" in order to score a job interview.
"AI can help many candidates put their best foot forward... but it needs to be used carefully," Hiscox chief underwriting officer Pete Treloar told TechRadar.
Meanwhile, it's not just job applicants using generative AI to automate the process. Recruiters have been outsourcing job interviews to often-flawed AI avatars.
Earlier this week, Fortune reported how a former software engineer went from earning $150,000 in upstate New York to living out of a trailer after being replaced by AI. Out of the ten interviews he scored after sending out 800 job applications, a handful of them were with AI bots.
In short, it's a dynamic that's unlikely to make applying for jobs any less grueling. Hiscox found that 41 percent of applicants said AI gives some candidates an unfair advantage, and 42 percent of respondents said the tech is misleading employers.
But now that the cat is out of the bag, it remains to be seen how hiring will adapt to a world teeming with accessible generative AI tools.
It's never been easier to lie on your resume — but anybody willing to do so will have to live with the consequences. Getting caught could not only lead to immediate disqualification but also damage one's professional reputation and, in a worst-case scenario, result in a lawsuit. Remember: just because everyone's doing it doesn't mean you won't get busted for it — or worse.
More on lying AIs: Law Firms Caught and Punished for Passing Around "Bogus" AI Slop in Court