
How AI Assessment Tools Affect Job Candidates' Behavior
Most companies, when considering AI assessment tools, focus on the efficiency and quality gains those tools bring. What they rarely factor in is how AI assessment may change candidates' behavior during the assessment itself. Our new research, spanning more than 13,000 participants across 12 studies, reveals that this is a crucial blind spot. We simulated a variety of assessment situations in both the laboratory and the field, and we collaborated with Equalture, a startup platform offering game-based hiring solutions.
The results show that job candidates consistently emphasized analytical traits when they believed AI was evaluating them, while downplaying the very human qualities—empathy, creativity, intuition—that often distinguish outstanding employees from merely competent ones. This drove candidates to present a different and potentially more homogeneous version of themselves, in turn affecting who was likely to succeed in an AI-enabled hiring process, with implications for organizations using AI in hiring, promotions, or admission decisions.
Why This Matters for Your Organization
The implications of our findings extend beyond individual hiring decisions. When candidates systematically misrepresent themselves, organizations face several critical challenges:
Talent pool distortion: While AI is sometimes blamed for making biased hiring decisions (for example, discriminating against women in the selection process), our research suggests the bias also runs the other way: knowing that an AI is doing the assessing biases candidates, leading them to believe they should prioritize their analytical capabilities. As a result, companies may be screening out exactly the candidates they need simply by using AI: the innovative thinker or emotionally intelligent leader you're looking for might present themselves as a rule-following analyst because they believe that is what the AI wants to see.
Validity compromise: Assessment tools are only as good as the data they collect. When candidates strategically alter their responses, the fundamental validity of the assessment process might be undermined. Organizations may no longer measure authentic capabilities—instead, they may measure what candidates think AI will value the most.
Unintended homogenization: If most candidates believe AI favors analytical traits, the talent pipeline may become increasingly uniform, potentially undermining diversity initiatives and limiting the range of perspectives in organizations. Companies like IBM and Hilton, which integrate AI into both hiring and internal promotion systems, must now contend with whether such tools nudge employees toward formulaic self-presentation.
New transparency regulations like the EU's AI Act, which require organizations to disclose AI use in high-stakes decisions, make these outcomes all the more likely. When candidates are aware that an AI is assessing them, they are more likely to change their behavior.
What Leaders Can Do
Based on our findings, organizations can take several concrete steps to address the AI assessment effect:
Radical transparency: Do not just disclose AI assessment; be explicit about what it actually evaluates. Clearly communicate that your AI can and does value diverse traits, including creativity, emotional intelligence, and intuitive problem-solving. This might include providing examples of successful candidates who demonstrated strong intuitive or creative capabilities. Currently, few companies seem to be transparent about what exactly their AI assesses; at the least, this information is not easily accessible when clicking through the career pages of many major companies. In that vacuum, applicants discuss and share their guesses on blogs and in videos, folk wisdom that may or may not align with actual practice. We advise companies not to leave their candidates to speculate.
Regular behavioral audits: Implement systematic reviews of your AI assessment outcomes. For instance, New York City has enacted Local Law 144, which requires employers to conduct annual bias audits of AI-based hiring tools. In response, HireVue, one of the market leaders in AI-based hiring, publishes recent audits for race and gender bias across jobs and use cases. Beyond demographic bias, we suggest using these audits to look for patterns indicating behavioral adaptation: Are candidates' responses becoming more homogeneous over time? Are you seeing a shift toward analytical self-presentation at the expense of other valuable traits?
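One way to operationalize such a homogeneity check is to track how similar candidates' free-text responses to the same prompt are, cohort by cohort. The sketch below is a minimal illustration under stated assumptions, not a production audit: it assumes you have access to raw response text, it uses a simple token-overlap (Jaccard) measure rather than semantic embeddings, and the two quarterly cohorts are invented for illustration.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two free-text responses."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def cohort_homogeneity(responses: list[str]) -> float:
    """Mean pairwise similarity within a cohort; a rising value across
    cohorts suggests candidates are converging on similar language."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical cohorts answering the same prompt in two quarters.
q1 = [
    "I trust my gut and improvise",
    "I brainstorm creative options",
    "I analyze the data first",
]
q2 = [
    "I analyze the data first",
    "I take a data-driven analytical approach",
    "I analyze metrics before deciding",
]

print(f"Q1 homogeneity: {cohort_homogeneity(q1):.2f}")
print(f"Q2 homogeneity: {cohort_homogeneity(q2):.2f}")  # higher = converging
```

A real audit would use the same comparison on much larger cohorts and with a semantic similarity measure, but the design choice is the same: compare within-cohort similarity over time rather than individual scores, so the metric flags convergence itself rather than any particular answer.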
Hybrid assessment: Some organizations combine human and AI assessments. Salesforce, for example, notes that a human reviews applications alongside the technology, and Nvidia and Philip Morris International guarantee that final assessment and decision-making rest with a human. One of our studies shows that while this hybrid approach does reduce candidates' tendency to highlight analytical capabilities, it does not eliminate it. To close the gap, you need to train your human evaluators to compensate for the AI effect.
The Path Forward
As AI becomes increasingly embedded in organizational decision-making, we must recognize that these tools do not just change processes—they change people. The efficiency gains from AI assessment may come at the cost of authentic candidate presentation and, ultimately, the human diversity that makes organizations innovative and resilient. The irony is striking: In our quest to remove human bias from hiring, we may have created a system where AI introduces a new form of bias. The solution is not to abandon AI, but to design assessment systems that account for and counteract these behavioral shifts. Only by keeping humans—not just metrics—at the heart of our assessment strategies can we build hiring systems that truly identify and nurture the diverse talent our organizations need.