
Why You Are Better Off Being Interviewed By AI Than A Human

Forbes · 25-05-2025

Even in its current state, AI is more accurate and less biased than most human interviewers - but that is a low bar. Although it would be almost unthinkable for anyone to get any job without first being subjected to a job interview, the science on this is quite clear: there is very little valid and unique information the interview provides that could not be obtained through other, more reliable and predictive means (e.g., science-based assessments, IQ and personality tests, past performance, and even AI-scraping of people's digital footprints). On top of that, the interview provides too much irrelevant and unethical information, even when humans are determined to ignore, or pretend to ignore, such data: candidates' socioeconomic status or social class, gender, ethnicity, and overall appearance and physical attractiveness (one of the strongest, most pervasive, yet rarely discussed biases). This is why employers and hiring managers feel it would be ludicrous to get rid of the interview: it provides them with so much information they are supposed to ignore but 'love' to have, even when they are unaware of their biases.

Unsurprisingly, academic studies show that the typical interview, which is unstructured (more like an informal chat than a rigorous standardized interaction) and amateurish (conducted and evaluated by people with little or no technical training, and with their own personal agendas or political interests), rarely accounts for more than 9% of the variability in future job performance. In fact, this is probably an overestimate: when hiring managers are in charge of both the selection interview and candidates' subsequent performance ratings or evaluations once they are hired into the role, there is a great deal of invisible or informal bias in the model. For example, most managers will be reluctant to accept that they have chosen the wrong candidate, so the best way to camouflage or disguise their error is to rate that candidate's performance positively even when they are doing terribly (so it reflects well on their choices). Likewise, even when managers are unaware that they hired the wrong person (because the candidate has learned to impression-manage them or 'fake good' once they are in the role), managers will still see them in a positive light. That is, the same biases that positively distorted the manager's opinion during the interview will likely still be at play months or years later, especially if the manager is not great at evaluating the candidate's actual performance and is distracted by performative aspects of the job (pretending to work, playing politics, sucking up, faking competence with charisma and confidence, and so on).

Enter AI, which is far from perfect, especially when it comes to assessment and hiring, at least in its current state (one would expect it to improve here, as it has in most tasks and subject-matter domains). That said, AI algorithms have been successfully deployed and researched as candidate screening and selection tools for years, well before AI went mainstream with generative AI and ChatGPT. Admittedly, to date, AI's value in assessment, hiring, selection, and talent identification is mostly about improving speed, efficiency, consistency, and cost, rather than accuracy.
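
A quick note on the '9% of the variability' figure above: in selection research, the share of performance variance a predictor accounts for is the square of its validity correlation, so 9% corresponds to a correlation of roughly r ≈ 0.30. The minimal sketch below just illustrates that arithmetic; the validity values assigned to the other methods are illustrative assumptions chosen for contrast, not figures reported in this article.

```python
# Illustrative only: mapping validity correlations (r) to the share of job-
# performance variance they explain (r squared). The r values are assumptions,
# except that 0.30 reproduces the ~9% figure cited above.

validities = {
    "typical unstructured interview": 0.30,  # 0.30^2 = 0.09, i.e. ~9% of variance
    "structured interview (assumed)": 0.45,
    "combined assessments (assumed)": 0.60,
}

for method, r in validities.items():
    print(f"{method:<32} r = {r:.2f} -> ~{r**2:.0%} of performance variance")
```

The point is simply that even a seemingly respectable validity correlation leaves most of the variance in future performance unexplained.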

When it comes to speed, efficiency, and cost, AI's main contribution is in high-volume pre-screening and screening, since quantity eventually boosts quality (if you can examine 1 million candidates, you are more likely to end up with 10 incredible candidates than if you can examine only 1,000), as well as in relieving recruiters from low-value, boring, routine, standardized activities: finding keywords in a CV or resume, sexing up job descriptions, or cold-messaging tons of candidates on LinkedIn - all of which AI does better. The more repetitive, boring, and low-value activities are outsourced to AI, the more time human recruiters and hiring managers have to actually connect with candidates (on a human-to-human level). However, it is questionable whether some of this time should actually be devoted to human-to-human interviews, or whether the interview should just be left to AI, including algorithmic de-selection and shortlisting of candidates. Consider some of the key advantages of well-designed AI-based interviews, by which I mean fully automated, AI-scored digital or video interviews:

(1) Unlike humans, AI is very good at focusing on relevant signals while ignoring irrelevant ones (including noise linked to social class, demographic status, and any information likely to decrease fairness and harm out-group candidates).

(2) Unlike humans, AI is much better at learning and unlearning, so it can continue to improve and refine its models, making them more predictive (especially if higher-quality performance data is used to fine-tune them).

(3) Although AI can be biased, those biases typically reflect human biases (when AI is 'taught' what an alleged high performer looks like, it will simply replicate or emulate prejudiced and biased preferences coming from humans). In fact, AI can never be biased in a truly human way. That is, unlike humans, AI will never have a fragile self-esteem it needs to protect or inflate by bringing other humans (or AI) down. Maybe in the future we will see AI evolve into this kind of neurotic or insecure intelligence, but not right now.

(4) AI can evaluate candidates in a more consistent and reliable way, offering the same treatment and evaluation to everyone. This is never the case with humans, even when the same panel of human interviewers is trained to evaluate or assess the same group of people and experimental conditions are put in place. As in a jury, human interviewers have their own mood swings, preferences, and biases, and most of these are hard to detect and manage.

(5) While we often attack AI for being 'black box' (in the sense that even predictive models can be hard to understand or decipher), regulations have done a good job of reducing the application of black-box models, and most AI/algorithmic scoring tools in recruitment and interview screening are now 'white box' (in that you can reverse-engineer the models to understand why certain scores were assigned or decisions were made; a simple sketch of this idea appears at the end of this article).

(6) In contrast to AI, the human brain is truly a black box. Consider that even when human interviewers truly believe that one candidate is better than the others, they will never know why they actually preferred them. They may have a story, including a story they tell themselves, but we will never know whether that story is true or just some BS they told themselves (since every human is biased, and the mother of all biases is to be unaware of our biases and treat them as facts when they are, at best, feelings about facts).

(7) Many studies show that despite the low acceptance and popularity of AI, candidates often prefer AI to human interviewers, especially when they have had bad experiences with human interviewers. These can include (but are not limited to) micro-aggressions, 'macro' (overt) aggressions, discrimination, harassment, arrogant or unfriendly treatment, and sheer indifference. This explains why, even in its current state, AI is a better alternative to many human interviewers, even if we measure this purely in terms of candidate acceptance or user experience.

(8) If there truly is a pattern linking interview activity (verbal and non-verbal communication, language, speech, manners, social skills, and the content and style that make one candidate differ from others) with future job performance, there is no question that AI will be better able to spot and detect this pattern than humans are. Moreover, it will do so in a more consistent, cheaper, and more scalable way. Note that even the most proficient human interviewer (perhaps a very experienced clinical psychologist with decades of training and experience) will have off days, feel up or down because of personal events, and be affected by conscious and unconscious biases. Such experts are also hard to find and can only manage a small number of interviews per day. To be sure, a bigger problem is that too many people think they are this expert when in fact they have no interviewing skills or expertise whatsoever.

(9) Meta-analytic data shows that the only interviews that consistently and substantially correlate with future job performance, and meaningfully assess potential, are highly structured and standardized: in other words, they look a lot more like psychometric assessments than like a typical conversation or chat between a human interviewer and a candidate. Again, this makes the interview perfectly suited to AI and computer intelligence rather than to the unpredictable and erratic human personality.

(10) To be sure, candidates gain a great deal from meeting their hiring manager, asking them questions, and getting a feel for their future bosses and the company culture (even if this, too, is a way to unleash or activate their own personal biases). There is no need to eliminate this, even if AI could do a better and more sincere job of describing a given culture (and soon also a given boss) to candidates! However, this human-to-human chat, and the opportunity for candidates to pick the right job, boss, and company, could be offered once candidates have already received an offer, with everything before that point requiring no human intervention.

If all of this sounds creepy or Orwellian, think about this analogy: AI interviews may be the recruitment equivalent of self-driving cars. They may not be quite ready for mass consumption and adoption, and they still produce crashes and errors in pilot mode (especially when there are human drivers on the road); however, like self-driving cars or autonomous vehicles, they are unlikely to produce 1.3 million human deaths per year, as human drivers do today. And just as drivers tend to think they are better than AI when they are not, most human interviewers think they are better than AI when they are not. AI does not need to be perfect to represent an improvement over human intelligence, and especially over human stupidity.
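
Returning to points (5) and (9) above: below is a minimal, hypothetical sketch of what a 'white box' scoring model for a structured interview can look like, where a candidate's score is a transparent weighted sum of standardized dimension ratings and can be decomposed to show exactly why it was assigned. The dimensions, weights, and ratings are invented for illustration; they are not taken from this article or from any specific vendor's tool.

```python
# Hypothetical 'white box' structured-interview scoring: the overall score is a
# weighted sum of dimension ratings, so every score can be broken down and
# explained. Dimensions, weights, and ratings are illustrative assumptions.

WEIGHTS = {
    "job_knowledge":   0.35,
    "problem_solving": 0.30,
    "communication":   0.20,
    "teamwork":        0.15,
}

def score_candidate(ratings: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the overall score and the per-dimension contributions."""
    contributions = {dim: WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = score_candidate(
    {"job_knowledge": 4.0, "problem_solving": 3.5, "communication": 4.5, "teamwork": 3.0}
)
print(f"overall score: {total:.2f}")
for dim, value in breakdown.items():
    print(f"  {dim:<16} contributes {value:.2f}")
```

In a deployed tool the weights would typically be estimated from performance data rather than hand-set, but the explainability property is the same: the contribution of each dimension to the final score is visible rather than hidden inside an opaque model.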
