Researchers swapped human recruiters for AI agents. AI did the job better, with a few drawbacks.
A new study found that applicants interviewed by an AI voice agent were 12% more likely to get a job offer than those screened by human recruiters. They were also more likely to actually start work and stick around after 30 days.
The professional recruiters had bet on themselves in this hiring experiment. The AI proved them wrong.
The study also found that when given the choice, 78% of applicants picked the AI interviewer over a human recruiter.
Brian Jabarian, an economist at the University of Chicago Booth School of Business, and Luca Henkel, a behavioral economist at Erasmus University Rotterdam, partnered with global recruitment firm PSG Global Solutions to pit AI against human recruiters in a large-scale hiring experiment.
The trial covered more than 70,000 applicants vying for entry-level customer service roles across 48 job postings in the Philippines. The jobs were with 23 Fortune 500 companies and 20 European firms.
Applicants were randomly assigned to one of three interview conditions: a human recruiter, an AI recruiter, or a choice between the two.
In all cases, human recruiters ultimately made the hiring decision after reviewing transcripts and a standardized test of language and analytical skills. That design allowed the researchers to isolate one variable: the interview conversation.
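The design is essentially a three-arm randomized trial. As a rough illustration rather than the researchers' actual procedure, here is a minimal Python sketch of randomly assigning applicants to the three interview conditions; the arm labels and the deterministic seeding scheme are assumptions made for the example.

```python
import random

# Illustrative three-arm random assignment mirroring the design described above.
# Arm labels and the seeding scheme are assumptions, not taken from the study.
ARMS = ["human_recruiter", "ai_recruiter", "applicant_choice"]

def assign_arm(applicant_id: str, seed: int = 2024) -> str:
    """Deterministically assign an applicant to one of the three interview arms."""
    rng = random.Random(f"{seed}-{applicant_id}")
    return rng.choice(ARMS)

# Example: assign a small batch of applicants.
for applicant_id in ["PH-00001", "PH-00002", "PH-00003"]:
    print(applicant_id, assign_arm(applicant_id))
```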
Both humans and the AI followed the same interview guide. It started with eligibility questions, moved into career goals and work experience, and ended with job details. But the outcomes diverged.
Why AI did better
Applicants interviewed by AI recruiters received job offers in 9.73% of cases, compared to 8.7% under human recruiters.
The study also found that they were 18% more likely to start work and 17% more likely to still be employed after 30 days.
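Those headline figures are relative differences. A quick back-of-the-envelope check in Python, using only the offer rates quoted above, reproduces the roughly 12% lift; reading the 12% as a relative rather than a percentage-point difference is an inference from the numbers, not a quote from the paper.

```python
# Sanity-check the reported lift using the offer rates quoted above.
ai_offer_rate = 0.0973     # offer rate under AI-led interviews
human_offer_rate = 0.087   # offer rate under human-led interviews

relative_lift = (ai_offer_rate - human_offer_rate) / human_offer_rate
print(f"Relative lift in offer rate: {relative_lift:.1%}")  # ~11.8%, i.e. roughly 12%
```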
Using natural language processing, the researchers found that AI interviews were more structured, covered more topics, and encouraged richer answers. AI-led interviews drew out the kinds of cues that human recruiters usually reward — like conversational depth — while minimizing weaker signals, such as filler responses or irrelevant questions.
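The paper's exact text-analysis pipeline is not spelled out here, but the kind of transcript metrics described, such as topic coverage and answer richness, can be approximated very simply. Below is a hypothetical Python sketch; the topic keyword lists and scoring are invented for illustration and are not the study's method.

```python
from collections import Counter

# Hypothetical transcript metrics: how many interview-guide topics an interview
# touched, and how long the applicant's answers were on average. The keyword
# lists are invented for illustration; the study's actual NLP pipeline may differ.
TOPIC_KEYWORDS = {
    "eligibility": {"shift", "schedule", "availability", "start"},
    "experience": {"worked", "role", "customer", "support"},
    "career_goals": {"goal", "grow", "learn", "career"},
    "job_details": {"salary", "benefits", "location", "hours"},
}

def transcript_metrics(answers: list[str]) -> dict:
    """Return topic coverage and average answer length for one interview."""
    tokens = Counter(w.lower().strip(".,!?") for a in answers for w in a.split())
    topics_covered = sum(
        any(tokens[kw] > 0 for kw in kws) for kws in TOPIC_KEYWORDS.values()
    )
    avg_answer_words = sum(len(a.split()) for a in answers) / max(len(answers), 1)
    return {"topics_covered": topics_covered, "avg_answer_words": avg_answer_words}

# Toy example: two short answers that touch experience and career goals.
print(transcript_metrics([
    "I worked in a customer support role for two years.",
    "My goal is to grow into a team lead position.",
]))
```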
Recruiters who reviewed the transcripts scored AI-interviewed candidates higher than those they interviewed themselves.
"AI-led interviews elicited more hiring-relevant information," Jabarian and Henkel wrote, adding that applicants also reported similar levels of satisfaction with AI recruiters compared to humans.
But the system wasn't perfect. About 5% of applicants ended their interviews once they realized they were speaking to AI, and in 7% of cases, the agent encountered technical issues. Applicants also rated the interaction as less "natural" than talking to a human.
Jabarian and Henkel did not respond to a request for comment from Business Insider.
How AI is changing hiring
AI has been increasingly used in the job-seeking and hiring process. Candidates are leaning on it to help tailor their résumés, while employers use it to sift through the thousands of applications they receive.
Emily DeJeu, an assistant professor at Carnegie Mellon University's Tepper School of Business who specializes in AI communication and etiquette, told Business Insider in May that AI-powered video interviews are likely to become more common as companies seek to streamline and automate early hiring stages.
Any time technology promises to save time and money and make everything faster, "we by default pursue it — there's a kind of inevitability to it," she said.
Some tech investors have predicted that AI will upend recruiting.
Victor Lazarte, a former general partner at Benchmark, said on an episode of the "Twenty Minute VC" podcast published in April that recruiters should be especially nervous about AI replacing their jobs.
He said AI models would soon be better than people at interviewing candidates — and far more efficient than companies' messy, manual hiring processes.
But not everyone is sold. In a Business Insider report published Monday, hiring managers said they are flooded with applications, many of them AI-optimized to seem like a perfect fit, while hundreds of frustrated job seekers described submitting thousands of applications without success.
AI has made hiring a "cat and mouse game" between candidates and employers, as both use technology to try to suss out the other, said Hatim Rahman, an associate professor of management and organizations at Northwestern University.
As a result, there's a push toward finding "more human signals in both the process of searching and applying," he added.