
Why You Are Better Off Being Interviewed By AI Than A Human
Even in its current state, AI is more accurate and less biased than most human interviewers - but that is a low bar.
Although it would be almost unthinkable for anyone to get a job without first being subjected to a job interview, the science on this is quite clear: the interview provides very little valid and unique information that could not be obtained through other, more reliable and predictive means (e.g., science-based assessments, IQ and personality tests, past performance, and even AI scraping of people's digital footprints).
On top of that, the interview exposes too much irrelevant information that it is unethical to use, even when humans are determined to ignore (or pretend to ignore) such data: candidates' socioeconomic status or social class, gender, ethnicity, and overall appearance and physical attractiveness (one of the strongest, most pervasive, yet rarely discussed biases).
This is why employers and hiring managers feel it would be ludicrous to get rid of the interview: it provides them with so much information they are supposed to ignore but 'love' to have, even when they are unaware of their biases.
Unsurprisingly, academic studies show that the typical interview, which is unstructured (more like an informal chat than a rigorous, standardized interaction) and amateurish (conducted and evaluated by people with little or no technical training, and with their own personal agendas or political interests), rarely accounts for more than 9% of the variability in future job performance. In fact, this is probably an overestimate: when hiring managers are in charge of both the selection interview and candidates' subsequent performance evaluations once they are hired into the role, a great deal of invisible or informal bias enters the model.
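For readers who want the arithmetic behind that figure (a short worked example, not a new empirical claim): variance explained is simply the square of the validity correlation, so a 9% ceiling implies a correlation of only about 0.3 between interview ratings and later performance.

```latex
% Variance explained (R^2) is the square of the validity correlation (r).
% If the unstructured interview explains at most 9% of performance variance:
\[
R^2 = 0.09 \quad \Longrightarrow \quad r = \sqrt{0.09} = 0.30
\]
% Even at this ceiling, roughly 91% of the variability in future job
% performance is left unexplained by the interview.
```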
For example, most managers will be reluctant to accept that they have chosen the wrong candidate, so the best way to camouflage or disguise their error is to rate that person's performance positively even when they are doing terribly (so it reflects well on their choices). Likewise, even when managers are unaware that they hired the wrong person (because the candidate has learned to impression-manage them or 'fake good' once they are in the role), managers will still see them in a positive light. That is, the same biases that positively distorted the manager's opinion during the interview will likely still be at play months or years later, especially if the manager is not great at evaluating the candidate's actual performance and is distracted by performative aspects of their job (pretending to work, playing politics, sucking up, faking competence with charisma and confidence, and so on).
Enter AI, which is far from perfect, especially when it comes to assessment and hiring, at least in its current state (though one would expect it to improve, as it has in most tasks and subject-matter domains). That said, AI algorithms have been successfully deployed and researched as candidate screening and selection tools for years, well before AI went mainstream with generative AI and ChatGPT.
Admittedly, to date, AI's value in assessment, hiring, selection, and talent identification lies mostly in improving speed, efficiency, consistency, and cost, rather than accuracy. In that sense, its main contribution is in high-volume pre-screening and screening, since quantity eventually boosts quality (if you can examine 1 million candidates you are more likely to end up with 10 incredible candidates than if you can examine only 1,000), as well as relieving recruiters of low-value, boring, routine, standardized activities: finding keywords in a CV or resume, sexing up job descriptions, or cold-messaging tons of candidates on LinkedIn - all of which AI does better.
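To make the quantity-boosts-quality arithmetic explicit, here is a minimal sketch; the 0.1% prevalence of 'incredible' candidates is a purely illustrative assumption, not a figure from any study.

```python
# Minimal sketch: the expected number of outstanding candidates scales
# linearly with screening volume. The prevalence value is an
# illustrative assumption, not an empirical estimate.
prevalence = 0.001  # assume 0.1% of applicants are truly outstanding

for pool_size in (1_000, 1_000_000):
    expected = pool_size * prevalence
    print(f"Screened {pool_size:>9,} candidates -> ~{expected:,.0f} outstanding expected")

# Screened     1,000 candidates -> ~1 outstanding expected
# Screened 1,000,000 candidates -> ~1,000 outstanding expected
```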
The more repetitive, boring, and low-value activities are outsourced to AI, the more time human recruiters and hiring managers have to actually connect with candidates (on a human-to-human level). However, it is questionable whether some of this time should actually be devoted to human-to-human interviews, or whether the interview should just be left to AI, including algorithmic de-selection and shortlisting of candidates. Consider some of the key advantages of well-designed AI-based interviews, by which I mean fully automated, AI-scored digital or video interviews:
(1) Unlike humans, AI is very good at focusing on relevant signals while ignoring irrelevant signals (including noise linked to social class, demographic status, and any information likely to decrease fairness and harm out-group candidates).
(2) Unlike humans, AI is much better at learning and unlearning, so it can continue to improve and refine its models, making them more predictive (especially if higher quality performance data is used to fine-tune them).
(3) Although AI can be biased, those biases typically reflect human biases (when AI is 'taught' what an alleged high performer looks like, it will simply replicate or emulate prejudiced preferences coming from humans). In fact, AI can never be biased in a truly human way. That is, unlike humans, AI will never have a fragile self-esteem it needs to protect or inflate by bringing other humans (or AIs) down. Maybe in the future we will see AI evolve into this kind of neurotic or insecure intelligence, but not right now.
(4) AI can predict in a more consistent and reliable way, offering the same treatment and evaluation to all candidates. This is never the case with humans, even when the same panel of human interviewers is trained to evaluate or assess the same group of people under experimental conditions. As in a jury, human interviewers have their own mood swings, preferences, and biases, most of which are hard to detect and manage.
(5) While we often attack AI for being a 'black box' (in the sense that even predictive models can be hard to understand or decipher), regulations have done a good job of reducing the use of black-box models, and most AI/algorithmic scoring tools in recruitment and interview screening are now 'white box' (in that you can reverse-engineer the models to understand why certain scores were assigned or decisions were made; a minimal sketch of what such a transparent score looks like follows this list).
(6) In contrast to AI, the human brain is truly a black box. Consider that even when human interviewers truly believe that one candidate is better than the others, they will never know why they actually preferred that candidate. They may have a story, including a story they tell themselves, but we will never know whether that story is true or just some BS they made up (since every human is biased, and the mother of all biases is to be unaware of our biases and treat them as facts, when they are at best feelings about facts).
(7) Many studies show that despite the low acceptance and popularity of AI, candidates often prefer AI to human interviewers, especially when they have had bad experiences with human interviewers. These can include (but are not limited to) micro-aggressions, 'macro' (overt) aggressions, discrimination, harassment, arrogant or unfriendly treatment, and sheer indifference. This explains why, even in its current state, AI is a better alternative to many human interviewers, even if we measure this purely in terms of candidate acceptance or user experience.
(8) If there truly is a pattern linking interview activity (verbal and non-verbal communication, language, speech, manners, social skills, and the content and style that make one candidate differ from others) to future job performance, there is no question that AI will be better able to spot and detect this pattern than humans are. Moreover, it will do so in a more consistent, cheaper, and more scalable way. Note that even the most proficient human interviewer (perhaps a very experienced clinical psychologist with decades of training and practice) will have off days, feel up or down because of personal events, and be affected by conscious and unconscious biases. It is also not easy to find such experts, and they can only manage a small number of interviews per day. To be sure, a bigger problem is that too many people think they are this expert when in fact they have no interviewing skills or expertise whatsoever.
(9) Meta-analytic data shows that the only interviews that consistently and substantially correlate with future job performance, and meaningfully assess potential, are highly structured and standardized: in other words, they look much more like psychometric assessments than like a typical conversation or chat between a human interviewer and a candidate. Again, this makes the interview perfectly suited to AI and computer intelligence rather than to unpredictable and erratic human personalities.
(10) To be sure, candidates gain a great deal from meeting their hiring manager, asking them questions, and getting a feel for their future bosses and the company culture (even if this, too, is a way to unleash or activate their own personal biases). There is no need to eliminate this, even if AI could do a better and more sincere job of describing a given culture (and soon a given boss) to candidates! However, this human-to-human chat, and the opportunity it gives candidates to pick the right job, boss, and company, could be offered once candidates have already received an offer - that is, after a selection process that involved no human intervention.
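As flagged in point (5) above, here is a minimal sketch of what a 'white box' interview score can look like. Every feature name, weight, and candidate value below is hypothetical, chosen only to show that a transparent linear model can be read off coefficient by coefficient - which is exactly what reverse-engineering a score means.

```python
# Minimal sketch of a "white box" interview score: a linear model whose
# weights are fully inspectable. All feature names, weights, and values
# are hypothetical, for illustration only.
WEIGHTS = {
    "structured_answer_quality": 0.50,
    "job_knowledge": 0.30,
    "communication_clarity": 0.20,
}

def score(candidate: dict) -> float:
    """Weighted sum of feature scores, each rated on a 0-1 scale."""
    return sum(w * candidate[f] for f, w in WEIGHTS.items())

def explain(candidate: dict) -> None:
    """Print exactly how much each feature contributed to the total."""
    for f, w in WEIGHTS.items():
        print(f"{f:<28} {w:.2f} x {candidate[f]:.2f} = {w * candidate[f]:.3f}")
    print(f"{'total':<28} {score(candidate):.3f}")

explain({
    "structured_answer_quality": 0.8,
    "job_knowledge": 0.6,
    "communication_clarity": 0.9,
})
```

Whatever one thinks of such models, every score they produce decomposes into named, weighted ingredients; no human interviewer's judgment can be audited this way.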
If all of this sounds creepy or Orwellian, consider this analogy: AI interviews may be the recruitment equivalent of self-driving cars. They may not be quite ready for mass consumption and adoption, and they still produce crashes and errors in pilot mode (especially when there are human drivers on the road); however, like self-driving cars or autonomous vehicles, they are unlikely to produce 1.3 million human deaths per year, as human drivers do today. And just as drivers tend to think they are better than AI when they are not, most human interviewers think they are better than AI when they are not. AI does not need to be perfect to represent an improvement over human intelligence, and especially over human stupidity.
