Latest news with #RembrandKoning


Fast Company
14-07-2025
- Business
- Fast Company
Women are slower to adopt AI at work. Here's why
As a leader in technology for nearly 30 years, I have observed waves of innovation disrupt the global business landscape and trigger major shifts in the way we work. Now, as AI takes its place as the next big thing, the global workforce is facing an overwhelming demand for new skills and capabilities. In my new book, Artificial Intelligence For Business, I highlight the impact of AI on the future of work, specifically the skills gaps and job displacements, as well as the essential skills global organizations will require in the future.

Interestingly, there is a cautious instinct at play, specifically for women at work, as they weigh the promise of innovation against the risks of AI application. This hesitation may be deterring women from using AI at work, as they worry that embracing AI could undermine their credibility, or even invite harsher judgment, instead of highlighting their true potential.

According to recent research conducted by Harvard Business School Associate Professor Rembrand Koning, women are adopting AI tools at a 25% lower rate than men, on average. Synthesizing data from 18 studies covering over 140,000 individuals worldwide, combined with estimates of the gender share of the hundreds of millions of users of popular generative AI platforms, the research demonstrates that the gender gap holds across all regions, sectors, and occupations. Although the study highlights that closing this gap is crucial for business and economic growth, and for developing AI-based technologies that avoid bias, the reasons the gap exists in the first place need to be explored further. Let's unpack several ethical, reputational, and systemic hurdles that may lead women to be more reluctant to use AI at work and explore how companies can help bridge this gap.

Ethical concerns

First, ethical concerns about AI adoption tend to weigh heavily on women's minds.
Studies indicate that women consistently rate hesitation about AI technology adoption higher than men do, placing greater weight on ethics, transparency, accountability, explainability, and fairness when evaluating AI tools. In one U.S.-based study examining public perceptions of AI fairness across three societal contexts, personal life, work life, and public life, women consistently perceived AI as less beneficial and more harmful across all contexts. This caution may reflect the strong ethical standards to which women hold themselves, and their teams.

These concerns are amplified by the rapid adoption of 'black box' AI tools across key business decision points, where the inner workings are opaque and hidden behind proprietary algorithms. As more female ethicists and policy experts enter the global field, they raise high-impact questions about bias, data privacy, and harmful consequences, feeling a special responsibility to get answers before signing off on innovative technology solutions. Women all over the world watched in dismay as leading AI ethicists were penalized for raising valid concerns over the ethical development and use of AI. Famously, Timnit Gebru, co-lead of Google's Ethical AI team, was forced out after pushing back on orders to withdraw her paper on the social risks of large language models. Subsequently, Margaret Mitchell was also fired while standing in solidarity with Gebru and raising similar concerns. Moves like these have sent a stark message that calling out potential harm in AI could make you a target.

Extra scrutiny

Alongside ethics, there may be a fear of being judged at work for leaning on AI tools. In my experience, women often face extra scrutiny over their skills, capabilities, and technical prowess. There may be a deep-rooted concern that leveraging AI tools will be perceived as cutting corners or reflect poorly on the user's skill level.
That reputational risk may be magnified when flaws or issues in the AI outputs are attributed to the user's lack of competence or expertise. Layered onto this is a host of ongoing systemic challenges inherent in the business environment and in the AI tools being implemented. For example, training data can under-represent the experiences of women in the workplace and reinforce the perception that AI products were not built for them. Nondiverse AI teams also act as a deterrent, creating additional barriers to participation and engagement.

The consequence of the gender gap in AI is more than discomfort. It can result in AI systems that reinforce gender stereotypes and ignore inequities, issues that are amplified when AI tools are applied to decision-making in essential areas such as hiring, performance reviews, and career development. For example, a recruitment tool trained on historical data may screen female candidates out of leadership roles, not due to lack of capability, but because historically there have been more male leaders. Blind spots like these further deepen the very gap that organizations are trying to close.

To counter this and encourage more women to use AI at work, organizations should start by creating an environment that balances guardrails with exploration. They should build psychological safety by encouraging dialogue that gives space for concerns, challenges, and feedback without fear of being penalized; open and transparent communication addresses the fears and uncertainty that accompany AI use in the workplace. They should also build fail-safe sandbox environments for exploration, where the goal is to learn through trial and error and develop skills through experiential learning.

Policy changes

Changing policy and guidelines in the organization can prove effective in encouraging more women to use AI at work. Apart from clear guidelines around responsible AI use, policies specifically allowing the use of AI can help close the gap.
In a study conducted by the Norwegian School of Economics (NHH), male students were less likely to view using AI as 'cheating.' Additionally, when policies forbade the use of AI, male students tended to use it anyway, while women adhered to the policy. When a policy explicitly allowing the use of AI was put in place, over 80% of both men and women used it, suggesting that policies permitting AI use can help more women adopt it. Crucially, organizations should make a proactive effort to bring more women into the AI conversation at every level. Diverse perspectives can prove effective in catching blind spots, and this approach sends a powerful message that representation matters. When women see their peers proactively shaping AI application in a safe, fair, and impactful way, they will feel more confident in participating as well.


Newsweek
10-07-2025
- Newsweek
AI is Creating a New Gender Divide
The AI revolution isn't ahead of us; it's here. But, for a technology that's been heralded as the future, it risks bringing with it problems from the past. Women are adopting generative AI technology at a slower rate than men—data from the Survey of Consumer Expectations found that 50 percent of men are using generative AI tools, compared to 37 percent of women. Further research from Harvard Business School Associate Professor Rembrand Koning found that women are adopting AI tools at a 25 percent lower rate. So, what's behind women's hesitation to adopt AI? Whether it's deepfake pornography, discrimination from AI hiring technology, or forms of digital violence online, research and data suggest that women have a fundamentally different relationship to AI than men do. The result? An AI gender gap, where women are being left behind in the technological revolution. Newsweek spoke to the experts to find out more about how AI's entrenchment of misogyny is creating a new gender divide.

What Is The AI Gender Gap?

A 2025 survey from the National Organization for Women (NOW) and Incogni found that 25 percent of women had experienced harassment enabled by technology, including AI-generated deepfake pornography. A study from the Berkeley Haas Center for Equity, Gender, and Leadership, meanwhile, analyzed 133 AI systems from different industries. It found that 44 percent showed gender bias. Beyond the studies and the data, what is the actual impact of this gender disparity on women? Enter: the AI gender gap.
Professor Ganna Pogrebna, Lead for Behavioral Data Science at the Alan Turing Institute and Executive Director at the AI and Cyber Futures Institute, told Newsweek over email, "There is mounting evidence that early negative experiences with AI systems—particularly those involving misogyny, sexualization, or coercion—can have profound psychological, behavioral, and societal consequences for women and girls." "These harms are not abstract; they are embodied in concrete experiences, amplified through algorithmic systems," Pogrebna said. And AI-inflicted harms begin at a young age. A 2024 report from the Center for Democracy & Technology found that generative AI technologies are worsening the sharing of non-consensual intimate imagery in schools and that female students are most often depicted in this deepfake imagery. So, what might be the long-term impacts on women and girls if they are having negative or traumatic experiences with AI? Laura Bates, activist and author of The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny, told Newsweek, "I think we will see a widening gap in terms of women's access to and uptake of new technologies." Bates said that this will include AI and that this will have "a devastating impact on everything from women's job prospects and careers to their involvement in further developments in the sector, which will, in turn, continue to intensify the problem because it will mean that new tools are tailored towards men as the majority of users." Asked if there is a risk that these negative experiences could lead to disengagement with future technologies, putting women on the back foot, Bates said, "Absolutely." "We already see how differently men and women use and experience existing forms of technology," Bates said. 
Both men and women experience forms of online harassment; the Pew Research Center found in 2021 that 41 percent of Americans had experienced some kind of harassment online. But harassment takes different forms. The Pew Research Center also found that 33 percent of women under 35 report experiencing sexual harassment online, compared to 11 percent of men, a figure which doubled from 2017 to 2021. "Women's use of tech is mediated by an entirely different online experience than men's, marked by abuse, harassment, doxing, threats, stalking and other forms of tech-facilitated gender-based violence," Bates said, adding, "It is inevitable that the barrage of abuse women and girls face online, combined with the gender bias inherently baked into many emerging tools, are going to have a chilling effect in terms of women's uptake and participation in new forms of tech." Pogrebna echoed this: "These traumatic experiences can embed deep mistrust in AI systems and digital institutions." Newsweek also spoke with Dr. Sarah Myers West, co-executive director at the AI Now Institute. In a phone call with Newsweek, she said, "There are disproportionate patterns of reinforcing inequality in ways that lead to harm for women and girls and people of other minorities." West pointed to "the way AI is intermediating access to our resources or our life chances," and noted "the AI that gets used, say, in a hiring process and reinforces historical employment-based discrimination." West said that this is affecting people in ways that are "profoundly consequential."
In 2018, Reuters reported that Amazon had scrapped an AI recruiting tool that was showing bias against women. In 2024, UNESCO's research highlighted that gender bias in AI hiring tools may penalize women through the reproduction of regressive stereotypes. Asked if negative experiences with AI in hiring scenarios could lead to a sense of mistrust and disengagement, West said, "I think rightly so, if it's being used in that way."

A Problem from the Past, Reinvented for the Future

AI might be increasingly prevalent, but the discourse over it is increasingly polarized. A 2025 survey from YouGov found that one-third of Americans are concerned about the possibility that AI will cause the end of the human race. Additionally, the survey found that Americans are more likely to say that AI will have a negative effect on society than on their own life and that most Americans don't trust AI to make ethical decisions. But as these apocalyptic alarms sound, concerns over how AI is further encoding misogyny into the fabric of society fall through the cracks. Back in 2024, a report from the UN said that AI is mirroring gendered bias in society, and gender disparity is already pronounced in the tech industry, with the World Economic Forum reporting in 2023 that women account for only 29 percent of science, technology, engineering and math (STEM) workers. "There is a growing body of evidence showing that AI systems reflect and amplify biases present in the datasets on which they are trained. This includes gender biases, sexualization of women, and reinforcement of harmful stereotypes," Pogrebna said. She added that large language models trained on "internet corpora" are risking "encoding toxic gender stereotypes and normalizing misogynistic narratives." A 2024 report from UNESCO found that "AI-based systems often perpetuate (and even scale and amplify) human, structural and social biases," producing gender bias, as well as homophobia and racial stereotyping.
Newsweek spoke with Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute at the University of Oxford in the United Kingdom, about this. "If AI is somewhat a mirror of society," Wachter said, "It kind of indirectly shows you where your place in the world is." Wachter then pointed to examples of gender bias in AI, including bias in image generators and text prediction, where AI is more likely to assume a male gender for professions like doctors, and a female gender for professions like nurses. A 2024 study in the JAMA Open Network found that when generating images of physicians, AI text-to-image generators are more likely to depict people who are white and male. "It's a tacit kind of reminder that certain spots are reserved for you and others are not," Wachter said. "We have to think about what it does to young women and girls." "How can we praise the technology to be so perfect when it is so problematic for a large portion of our society, right? And just ask the question, who is this technology actually good for? And who does it actually benefit?" Wachter said. She added, "It gives people a very early idea of what your role is supposed to look like in society." Pointing to the issues with AI, Wachter said, "We would never do this with a car, right? We would never just say, you go and drive. I know it's failing all the time." "What does it say about the value of being a woman?" she said. "If it's okay that this injury will happen, we know it will happen, but we're going to bring it on the market anyway, and we're going to fix it later." Newsweek also spoke with Dr. Kanta Dihal, a lecturer in science communication at Imperial College London, who shared some of the concerns that Wachter does. "There is so much that regularly goes wrong around the topics of women and technology in the broader sense," Dihal said. 
In terms of the relationship women have with AI, Dihal said there is a feeling of "Is this for me, or is this meant to keep me in my place? Or make things worse for me? Am I the kind of person that the creators of this technology had in mind when they designed it?" "So many different career paths and our schools as well are indeed introducing AI-related technologies that if you don't want to use them, you're already sometimes on the back foot," Dihal said, adding, "It's going to be both a matter of being disadvantaged in school and career progression."

So, what would inclusion in AI look like? Bates told Newsweek that we need to see government regulation of AI technology "at the point they are rolled out to public or corporate use" in order to ensure that safety and ethics standards are met before implementation, "not after women and marginalized communities have already faced significant discrimination." She added, "With AI technologies poised to become inextricably intertwined with almost every aspect of our personal and professional lives, that must change in order to ensure that women, girls, and marginalized groups are able to reap the same benefits from these technologies as everybody else, without suffering negative consequences." Meanwhile, Pogrebna told Newsweek, "The marginalisation of women in AI is not an inevitable by-product of technological advancement—it is the result of design choices, governance gaps, and historical inequities embedded in data and institutions.
A multi-pronged approach that includes technical, procedural, legal, and cultural reforms is not only possible but has already demonstrated early success in multiple domains." She added that technical fixes are necessary but insufficient without regulatory frameworks to enforce accountability. As AI technology continues to develop and become more prevalent, it's clear that the fabric of society continues to change at a rapid pace, and the dream of a tech revolution that leads to a fairer society is still there. What's unclear is if AI is doomed to code a world that's bugged with the same prejudice as the one that came before it.


Time Magazine
25-05-2025
- Science
- Time Magazine
Who does AI help the most?
One of the big questions about genAI's implications for work is whether it gives a greater performance boost to less experienced, lower-performing workers or more experienced, higher-performing workers. Research skews toward the former, but a few studies have found that access to genAI tools like ChatGPT helps high performers more. One of the most well-known studies in the second camp has now been discredited. The paper—which was widely covered in media outlets, including Charter—found, among other things, that a genAI tool that helped material scientists discover new materials helped high-performing scientists significantly more than it helped their lower-performing peers. The author of the paper is no longer at MIT, and the university said it now 'has no confidence in the veracity of the research contained in the paper.' This came as a shock to us, as that research was important to our understanding of how AI may impact work and inequality. Given its retraction, we wanted to synthesize what the other current research says about this question of whether AI will widen or narrow performance gaps between workers. The question matters for understanding whether AI will increase inequality, explains Rembrand Koning, an associate professor at Harvard Business School, who co-authored a paper on this topic. But, he adds, it also has important implications for how AI might impact less experienced workers. 'If [AI] helps people with more experience, [who] tend to be the people who can do things better, it might take out the lowest rungs of the labor market because we think AI can have such large effects.' Longer term, Koning adds, 'there may be policy solutions or organizational changes that can get around some of these problems…we need to know the facts to start understanding how the economy might respond.'
Most studies that look into this question find that genAI has an equalizing effect, helping less experienced, lower-performing workers more than their more experienced, higher-performing peers. With the retraction of the materials science paper, I know of only a few papers that have found the opposite to be true. The AI-as-an-equalizer result has shown up in studies of everything from customer-support work to writing and consulting tasks. There are two important aspects of these studies to keep in mind when applying their results to other settings. First, the tasks in question are areas where we know AI already performs very well. Second, in these studies, what the genAI tool produces is often relatively close to—or, in some cases, is—the final product. So the mechanism through which AI levels the playing field in these studies is essentially by doing much, sometimes most, of the work. The AI-widens-performance-gaps result has shown up in a few papers, including one that looked at college students in a debate competition and another that studied entrepreneurs in Kenya. Both studies suggest judgment plays a crucial role in whether or not someone benefits from AI. In the Kenya context, for example, the AI tool provided a range of advice to entrepreneurs, and it was up to them to decide what to do with it. 'Those who have the judgment do better because they're able to be like, 'Yeah, these 10 pieces of advice—six are terrible, two are neutral, two are actually really good. Let me focus on the two that can actually help my business,'' explains Harvard Business School's Koning, one of the study's co-authors. So, will AI widen or narrow performance gaps? My view is that it depends on a job's level of autonomy. If you're in a job where you help set your goals, determine how to reach those goals, and make many decisions throughout your day, AI likely helps you more if you're a high performer with strong judgment.
If, however, you're in a role that's more constrained, requires less decision-making, and involves a pre-defined set of tasks that AI can do well, then AI likely narrows the performance gap. 'The way I think of that is when you start your day and you're looking at the possible number of paths you can go down…how many paths could I possibly choose? And then how different are the end destinations of those paths?' explains Ben Weidmann, director of research at the Harvard Skills Lab and co-author of a related paper. 'My hypothesis, which is very similar to yours…is that jobs where there are more paths are going to disproportionately have the AI gains go to people who are high-skilled' in their decision-making capabilities, says Weidmann. 'The other way to think about it is [whether] the AI [is] producing the inputs or the outputs for your job,' says Koning. In many of the studies that find that AI narrows performance gaps, the AI's output becomes the worker's output, with varying degrees of editing. Koning points out that in the debate and entrepreneurship studies, the AI's answers were the inputs for the tasks of debating an opponent or making critical business decisions and implementing them.
Yahoo
20-05-2025
- Science
- Yahoo
AI in the workplace is nearly 3 times more likely to take a woman's job than a man's, UN report finds
As AI transforms workplaces, the technology has an outsized impact on women's jobs, according to new data from the United Nations' International Labour Organization and Poland's National Research Institute. To help future-proof their careers, women can use AI to augment their jobs, but they are less likely to engage with the technology than their male counterparts, according to Harvard Business School professor Rembrand Koning. As workers grapple with anxiety around artificial intelligence replacing them, women in the workplace may have extra reason to fear. Jobs traditionally held by women are much more exposed to AI than those traditionally held by men, according to new data from the United Nations' International Labour Organization (ILO) and Poland's National Research Institute (NASK). In higher-income countries, jobs with the highest risk of AI automation make up about 9.6% of women's jobs, compared to 3.5% of men's, the report released Tuesday found. More broadly, 25% of global jobs are potentially exposed to generative AI, a percentage that increases to 34% among higher-income countries. The report notes clerical and administrative jobs have the highest exposure to AI, which could be one reason why AI poses an outsized risk to women workers. Between 93% and 97% of secretary and administrative assistant positions in the U.S. were held by women between 2000 and 2019, according to the U.S. Census Bureau. Comparatively, women made up between 40% and 44% of the overall workforce in the same 20-year period. Secretarial and administrative work is the fifth most common profession for women in the U.S., according to the Department of Labor. Notably, the study does not mention caretaker jobs such as health aides, which require emotional labor and are more likely to be held by women; they are considered more AI-proof.
While AI has shown potential to gobble up jobs like software engineering and computer programming, the technology may also threaten entry-level positions across white-collar industries beyond administrative roles. A Bloomberg report in April found AI could replace more than half the tasks performed by market research analysts and two-thirds of tasks done by sales representatives. The technology could perform only 9% and 21%, respectively, of the tasks of those positions' managers. The ILO-NASK report isn't meant to say that AI will eliminate clerical or entry-level jobs. Rather, these jobs still require human involvement in some capacity, and identifying jobs that AI can partially complete can help prepare the workforce in those industries for technological changes. 'This index helps identify where GenAI is likely to have the biggest impact, so countries can better prepare and protect workers,' Marek Troszyński, senior expert at NASK, said in the report. Rembrand Koning, associate professor of business administration at Harvard Business School, believes one key to women future-proofing workplace roles that may be more exposed to AI is to view AI as a tool, not a threat. 'This goes back to the distinction between automation versus augmentation when we think about AI,' Koning told Fortune. 'We can think of this as a threat, which is that it's going to automate away a lot of these clerical jobs that might be held more by women. On the other hand, we can think of AI as automating a lot of this work, of allowing [workers] to take on tasks that might be higher paying, or that there might be more competition.' While Koning sees a path forward for workers to use AI to their benefit, he also sees a gender barrier: women are using AI tools at an average 25% lower rate than men, his research found.
There's not one clear reason for this disparity, Koning said, but one explanation outlined in a working paper co-authored by Koning is that women are more concerned about the ethics of AI. Some fear they will be judged as cheating for using the technology, or that leaning on AI tools will cause male colleagues to question their intelligence. 'Men seem to be much more confident—shall I say, overconfident—that, if they use AI, they'll still get all the benefits,' Koning said. The onus of changing who feels comfortable accessing AI falls not on the women workers, but on leaders in the workplace, Koning said. In many workplaces, workers, usually men, experiment with AI tools in the shadows. Even if an office doesn't have a license for or partnership with an AI company, its management should still set clear expectations and provide resources on how to use the technology, Koning suggested. 'If we want to make sure it's inclusive, it includes all workers, it's the job of a leader to bring everybody in,' he said.