'Grammatically correct and emotionally vacant': The pitfalls of relying on AI too much in your job hunt

Job seekers are going overboard using AI.
With so many relying on tools like ChatGPT to cut corners, recruiters say they're getting stuck with piles of robotic-sounding résumés and applicants who appear to be cheating during video interviews.
While the recruiters BI spoke with said there's nothing wrong with using AI to save time writing, research a prospective employer, or practice answering common questions, they warn that applicants are denting their odds of success by becoming too dependent on it. They say the timing of the trend is especially bad given how many people are looking for jobs.
"In this market, you've got to stand out," said Steve Levy, a recruiter in Long Island, New York.
When it comes to résumés, it's not just the well-known overuse of em dashes that's tipping off recruiters. It's also the eerie perfection of the documents themselves.
"Every sentence is grammatically correct, evenly spaced, and emotionally vacant," said Levy, who's been a recruiter for nearly four decades. "Real résumés have scuff marks."
Buzzwords and blunders
Another telltale sign: AI-written résumés share the same formatting and hollow buzzwords, such as "dynamic," "innovative," and "cross-functional." Many also repeat the same stock phrases, such as "accelerated time-to-market" and "exceeding stakeholder expectations."
Rather than use these, job hunters should find ways to insert personal touches that add substance to their résumés, said Tejal Rives, a hiring manager in Tempe, Arizona, for a large tech company.
"There's no harm in using AI or templates, but the customization is not there," she said. "You're hurting your chances by sounding like everyone else."
AI tools also tend to introduce factual errors into résumés, which recruiters often spot because dates, numbers, and other details don't match what's on a candidate's LinkedIn profile or what the person says during the interview process.
"Take a real fine-tooth comb to what AI produces for you," said Desiree Goldey, a recruiter in Austin, Texas, who reviews an average of 500 résumés a week and estimates that about half are written with AI to the point that it's noticeable. "Yes, it saves you time, but you have to go back and look at it."
Overall, résumés written by AI tend to come across as soulless, she added. The human element is just missing, and that's a turnoff.
"You're not portraying yourself with any authenticity," Goldey said. "You become this robot and nobody is going to hire a robot."
AI shows up in interviews, too
Some candidates are also consulting AI apps on their phones while interviewing with recruiters over video platforms such as Zoom. Recruiters say these individuals repeat the questions they've been asked out loud, then glance to the side of the screen to read whatever answers their AI program spits out. They also usually pause briefly before answering.
Goldey said it's fine to practice interviewing with AI, but when it's time to go on camera, she expects candidates to look at her the whole time — not their phones.
"To use AI in an interview to me is a no," she said. "It's the worst thing you could do."
Bonnie Dilber, a senior talent-acquisition manager for a fully remote software company, said she's seen such behavior rising over the past few months. She recalled an instance in which a candidate went even further by using an AI filter to superimpose their face on another person's body. The other individual — the one who could only be seen from the neck down — answered the recruiter's questions.
"You could see the filter glitch," she said.
Dilber suspects people are going to such lengths because they lack confidence in their interviewing skills and believe AI will give them an edge. Or, she said, they could be engaging in fraudulent activity, such as trying to understand the interview process so they can sell the information they glean to job seekers.
She and other recruiters on her team use an AI platform that records interviews to generate transcripts and summarize what's discussed. If they sense something is fishy, they'll review the footage.
"Recording the interviews for us is a huge way that we can catch these things," Dilber said. "I imagine a future in which maybe to begin an interview, you have to scan an ID."
Have a job-search story to share? Contact the reporter via email at sneedleman@insider.com.

Related Articles

OpenAI releases powerful new open models

Axios • 22 minutes ago

OpenAI on Tuesday debuted two freely downloadable models that it says can, for certain tasks, match the performance of some modes of ChatGPT.

Why it matters: OpenAI is aiming the new models at customers who want the cost savings and privacy benefits that come from running AI models directly on their own devices rather than relying on cloud-based services like ChatGPT or its rivals. It's also pitching the open models to countries that want to avoid getting their AI tools from the cloud servers of Google, Microsoft, or other tech giants.

The big picture: The arrival of China's DeepSeek earlier this year jolted the open-model world and suggested that China might be taking the lead in that category, while Meta's commitment to its open source Llama project has come into question as the company pivots to the "superintelligence" race.

What they're saying: "We're excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible," CEO Sam Altman said in a statement. "Going back to when we started in 2015, OpenAI's mission is to ensure AGI that benefits all of humanity," Altman said. "To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit."

Driving the news: OpenAI is releasing two new open models, both capable of chain-of-thought reasoning and accessing the web. They can also, if desired, work in conjunction with larger cloud-based AI models. The first, a 117-billion-parameter model called gpt-oss-120b, can run on a single GPU with 80 gigabytes of RAM. The second, a 21-billion-parameter model called gpt-oss-20b, is designed to run on laptops or other devices with 16 gigabytes of RAM. Both models are available via Hugging Face and other cloud providers. Microsoft is also making available a version of the smaller model that has been optimized to run on Windows devices.

The company provided various benchmarks showing the open models performing at or near the level of the company's o3 and o4-mini models.

Yes, but: The new open models are text-only, unlike most of OpenAI's recent models, which are so-called multimodal models capable of processing and outputting text, images, audio, and video.

Between the lines: Technically, the models are "open weights" rather than "open source," meaning anyone can download and fine-tune the models, but there's no public access to other key information, like training data details.

OpenAI to Give Away Some of the Technology That Powers ChatGPT

New York Times • 24 minutes ago

In a move that will be met with both applause and hand-wringing from artificial intelligence experts, OpenAI said on Tuesday that it was freely sharing two of its A.I. models used to power online chatbots.

Since OpenAI unveiled ChatGPT three years ago, sparking the A.I. boom, it has mostly kept its technology under wraps. But many other companies, looking to undercut OpenAI, have aggressively shared their technology through a process called open source. Now, OpenAI hopes to level the playing field and ensure that businesses and other software developers stick with its technology.

OpenAI's shift adds more fuel to a long-running debate between researchers who believe it is in every company's interest to open-source its technology, and national security hawks and A.I. safety pessimists who believe American companies should not be sharing theirs. The China hawks and A.I. worriers appear to be losing ground. In a notable reversal, the Trump administration recently allowed Nvidia, the world's leading maker of the computer chips used to create A.I. systems, to sell a version of its chips in China.

Many of the San Francisco company's biggest rivals, particularly Meta and the Chinese start-up DeepSeek, have already embraced open source, leaving OpenAI as one of the few A.I. companies not sharing what it was working on.

The models being offered by OpenAI, called gpt-oss-120b and gpt-oss-20b, do not match the performance of OpenAI's most powerful A.I. technologies. But they still rank among the world's leading models, according to benchmark test results shared by the company. If people use those newly open-source models, OpenAI hopes they will also pay for its more powerful products.

"If we are providing a model, people are using us," Greg Brockman, OpenAI's president and one of its founders, said in an interview with The New York Times. "They are dependent on us providing the next breakthrough. They are providing us with feedback and data and what it takes for us to improve that model. It helps us make further progress."

Open source has been a common practice among software companies for decades. As OpenAI and other companies began developing the kind of technology that would eventually drive chatbots like ChatGPT nearly a decade ago, they often open-sourced it.

"If you lead in open source, it means you will soon lead in A.I.," said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world's open-source A.I. projects. "It accelerates progress."

But after OpenAI shared a technology called GPT-2 in late 2019, it stopped open-sourcing its most powerful systems, citing safety concerns. Many of OpenAI's rivals followed its lead. When OpenAI unveiled ChatGPT in late 2022, a growing chorus of A.I. experts argued that open-source technologies could cause serious harm. This kind of technology can help spread disinformation, hate speech, and other toxic language. Many researchers also worry that such systems could one day help people build bioweapons or wreak havoc as governments and businesses connect them to power grids, stock markets, and weapons.

But the public conversation started to shift in 2023, when Meta shared an A.I. system called Llama. Meta's decision to go against the grain fueled a growing open-source ecosystem in the United States and other parts of the world. By late 2024, when DeepSeek released a technology called V3, China had shown that its open-source systems could challenge many of the leading U.S. systems.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

OpenAI said that it released open-source systems in part because some businesses and individuals prefer to run these kinds of technologies on their own computer hardware rather than over the internet. One of the new systems, gpt-oss-20b, is designed to run on a laptop. The other, gpt-oss-120b, requires a more powerful system equipped with the specialized computer chips used to build the leading A.I. systems.

Mr. Brockman acknowledged that A.I. can be used both to harm and to empower people, but he said the same is true of any powerful technology. He said that OpenAI has spent months, even years, building and testing its new open-source systems in an effort to reduce any harm.

The debate over open source is expected to continue as companies and regulators weigh the potential harm against the power of the time-tested tech method. Many companies have changed strategy over the years and will continue to do so. After creating a new superintelligence lab, Mark Zuckerberg and his fellow executives at Meta are considering their own shift in strategy. They might abandon the A.I. technology that the company has freely shared with researchers and businesses, called Behemoth, and move toward a more guarded strategy involving closed-source software.

OpenAI Hits 700 Million Users, $300 Billion Valuation as $8 Billion Cash Burn Is Projected

Business Insider • an hour ago

OpenAI is set to hit a major user milestone, reaching 700 million weekly active users. That is up 40% from 500 million at the end of March and about four times the number from the same period last year. The company says it now handles about 2.5 billion prompts each day from around 330 million individual users.

The growth comes alongside a large funding boost. OpenAI recently completed an $8.3 billion funding round at a $300 billion valuation, months ahead of schedule. The round was led by Dragoneer Investment Group, which invested $2.8 billion. Other new investors include Blackstone, TPG, and T. Rowe Price, while Andreessen Horowitz and Sequoia Capital returned. This raise is part of OpenAI's plan to secure $40 billion in 2025; the company raised $2.5 billion in March.

OpenAI's annualized revenue is about $13 billion and could reach $20 billion by the end of the year if current trends hold. The funding is expected to support product development and expand the company's reach into more business and consumer markets.

Growth Is Evident, But Costs and Competition Mount

Feature updates are also playing a role in the growth. The ChatGPT Projects tool has been expanded with more research features, voice mode, and a memory system that can recall past conversations inside projects. These changes are making it easier for users to manage longer and more complex work inside the platform. OpenAI now counts more than 5 million paying business subscribers, up from 3 million in June.

However, the company faces financial and competitive challenges: it expects to burn about $8 billion in cash by the end of 2025, and current projections suggest it will not reach cash-flow breakeven before 2029. Competition from Google DeepMind (GOOG), Meta (META), and Microsoft (MSFT)-backed AI projects is also increasing. Moreover, China-based companies, such as Alibaba (BABA) with its Qwen 3 model and DeepSeek with its R1 model, are threatening American companies' AI dominance.

In short, OpenAI's combination of rapid user growth, higher revenue, and heavy investment is impressive, even formidable. Still, retail investors will be watching to see whether the company can sustain this pace while managing costs and defending its position in a fast-growing market. Using TipRanks' Comparison Tool, we've compared some of the notable companies that employ AI similar to OpenAI's ChatGPT.
