AI in the workplace is nearly 3 times more likely to take a woman's job than a man's, UN report finds
As workers grapple with anxiety around artificial intelligence replacing them, women in the workplace may have extra reason to fear. Jobs traditionally held by women are much more exposed to AI than those traditionally held by men, according to new data from the United Nations' International Labour Organization (ILO) and Poland's National Research Institute (NASK).
In higher-income countries, jobs at the highest risk of AI automation account for about 9.6% of women's jobs, compared with 3.5% of men's, a nearly threefold gap, the report released Tuesday found. More broadly, 25% of jobs globally are potentially exposed to generative AI, a share that rises to 34% in higher-income countries.
The report notes that clerical and administrative jobs have the highest exposure to AI, which could be one reason AI poses an outsized risk to women workers. Between 93% and 97% of secretary and administrative assistant positions in the U.S. were held by women between 2000 and 2019, according to the U.S. Census Bureau, even though women made up only 40% to 44% of the overall workforce in that period. Secretarial and administrative work is the fifth most common profession for women in the U.S., according to the Department of Labor.
Notably, the study does not mention caretaker jobs such as health aides, which require emotional labor, are more likely to be held by women, and are considered more AI-proof.
While AI has shown potential to gobble up jobs like software engineering and computer programming, the technology may also threaten entry-level positions across white-collar industries beyond administrative roles. A Bloomberg report in April found AI could replace more than half the tasks performed by market research analysts and two-thirds of the tasks done by sales representatives, yet only 9% and 21%, respectively, of the tasks done by those workers' managers.
The ILO-NASK report isn't meant to say that AI will eliminate clerical or entry-level jobs. Rather, the authors note that these jobs still require human involvement in some capacity, and that identifying jobs AI can partially complete can help prepare the workforce in those industries for technological change.
'This index helps identify where GenAI is likely to have the biggest impact, so countries can better prepare and protect workers,' Marek Troszyński, senior expert at NASK, said in the report.
Rembrand Koning, associate professor of business administration at Harvard Business School, believes one key for women to future-proof roles that are more exposed to AI is to treat the technology as a tool, not a threat.
'This goes back to the distinction between automation versus augmentation when we think about AI,' Koning told Fortune. 'We can think of this as a threat, which is that it's going to automate away a lot of these clerical jobs that might be held more by women. On the other hand, we can think of AI as automating a lot of this work, of allowing [workers] to take on tasks that might be higher paying, or that there might be more competition.'
While Koning sees a path forward for workers to use AI to their benefit, he also sees a gender barrier: Women are using AI tools at an average 25% lower rate than men, his research found.
There's not one clear reason for this disparity, Koning said, but one explanation outlined in a working paper he co-authored is that women are more concerned about the ethics of AI. Some fear they will be judged as cheating for using the technology, or that leaning on AI tools will cause male colleagues to question their intelligence.
'Men seem to be much more confident—shall I say, overconfident—that, if they use AI, they'll still get all the benefits,' Koning said.
The onus of changing who feels comfortable using AI falls not on women workers but on workplace leaders, Koning said. In many workplaces, workers, usually men, experiment with AI tools in the shadows. Even if an office doesn't have a license for or partnership with an AI company, its management should still set clear expectations and provide resources on how to use the technology, Koning suggested.
'If we want to make sure it's inclusive, it includes all workers, it's the job of a leader to bring everybody in,' he said.
This story was originally featured on Fortune.com
Related Articles


Fast Company
AI isn't killing jobs, it's changing who gets hired
I began my career in neuroscience—not in business, not in engineering, not in HR. When I became head of product at GitLab, I hadn't managed a product team before. I didn't have the traditional credentials. But someone took a chance on me based on what I could contribute, not where I had worked. That moment changed the trajectory of my career. It also changed how I hire.

At Remote, we focus on capability over pedigree. What someone can do matters far more than what their resume suggests. That mindset has always been useful. But with the rise of AI, it's becoming essential. The shift we're experiencing goes beyond productivity and automation—it's about how we define job readiness, recognize potential, and avoid replicating the exclusions of the past. AI is already changing how people work. But if we want it to improve how we hire, we must apply it deliberately.

This shift is happening as attitudes toward traditional credentials are also changing. Amid rising tuition costs and mounting student debt, just 22% of Americans say a four-year degree is worth the cost if it requires loans, according to Pew Research Center. If companies keep leaning on degree requirements as a proxy for readiness, they risk missing a growing pool of skilled, AI-fluent talent who are proving themselves outside conventional pipelines.

AI is changing who can contribute—and how

I view AI as essential. It's deeply embedded in my company's culture and how we function, and its ability to multiply talent has completely shifted how we, and many companies we support, operate. Less talked about, however, is that it has also changed what it means to contribute. People with less formal training can do more, faster, if they're equipped with the right tools and a clear mandate. Someone without a formal degree can use AI to complete tasks once reserved for experts, such as analyzing data, drafting technical documentation, or even writing code. A single parent in a rural town can contribute meaningfully to remote teams while spending each day with their children. The same tools that replace certain functions can also empower a much wider set of people to participate in the knowledge economy.

That doesn't mean experience is irrelevant. It means the gap between being 'qualified' on paper and being able to deliver in practice is narrowing, but our hiring systems haven't kept pace.

This shift demands a change in how we evaluate talent. If contribution no longer depends on pedigree, hiring systems built around degrees, brand names, and linear resumes start to fall short. Companies need to shift from resume screens to problem-solving prompts, or from interview panels to real-world trial projects. While support for skills-based hiring has grown in recent years, a 2024 report from Harvard Business School and the Burning Glass Institute found that fewer than one out of every 700 hires in the past year was made based primarily on skills rather than traditional credentials. The appetite for change is clear, but until hiring systems catch up, companies will keep filtering out exactly the kind of talent they say they want.

The resume is losing signal

The temptation is to believe that AI itself will solve that problem—that it will automatically surface hidden talent. But that's a dangerous assumption. Left unchecked, AI hiring systems can replicate and even intensify existing biases. Algorithms trained on historical data may favor candidates who resemble previous hires based on education, geography, or background.
In some cases, automated filters penalize career gaps or overlook nontraditional applicants entirely. If we're not careful, we risk embedding these filters deeper into the systems we use to scale. Access to AI tools and fluency with them is not evenly distributed. Candidates from underrepresented backgrounds, non-native speakers, or people living in under-resourced regions may not have equal exposure or confidence with these tools.

Equity isn't just moral; it's operational

To spot the best talent, we need hiring practices that reflect modern skills: adaptability, communication, and the ability to learn quickly. My company uses asynchronous workflows that mirror how our teams operate. We emphasize clarity of thought, responsiveness, and problem-solving in context. Our internal documentation and onboarding approach are designed to help people ramp quickly, regardless of background or time zone. Those practices make it easier to evaluate candidates based on how they work, not just how they present.

Remote work has already proven that talent doesn't need to be colocated to contribute. It's also exposed where structural inequities persist. Access to reliable infrastructure, tool fluency, and global employment systems still varies widely. Equity doesn't happen by default. It must be designed.

AI is redefining readiness

AI may accelerate tasks and reduce the cost of execution. But it doesn't eliminate the need for talent. It raises the bar for how talent is integrated and who gets a fair shot. The best candidates may not come through traditional pipelines, live in a major city, or have a college degree. But they are ready to contribute.

What companies need now are hiring systems that prioritize contribution over credentialism. That includes making AI training a standard part of onboarding—not a perk for the technically inclined—and ensuring that workflows reflect how teams operate. If your work is async, global, or fast-changing, the hiring process should test for those dynamics.

Here's where I recommend employers start:

- Test for how people will work, not how well they interview. Use trial projects, async exercises, or written problem-solving prompts that mirror real workflows. And yes, let them use AI.
- Make AI training part of onboarding for everyone and treat AI literacy as a standard skill to level the playing field.
- Audit your tools and data for bias. Regularly review which signals your systems reward and whether they're excluding qualified, nontraditional candidates (a minimal example of one such check follows below).

The best candidates may not look like your past hires, but you might be surprised where you find talent ready to deliver.
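One concrete way to start the bias audit recommended above is the "four-fifths" (adverse-impact) rule used in U.S. employment-discrimination analysis, which flags any group whose selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only: the group labels and screening outcomes are hypothetical, and a real audit would pull records from the hiring system's own logs and pair this ratio with a review of which resume signals the model actually rewards.

```python
# Minimal sketch of a four-fifths-rule check on screening outcomes.
# Groups "A" and "B" and the pass/fail records are hypothetical.
from collections import defaultdict

def adverse_impact_ratios(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate of each group divided by the highest group's rate."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, selected in records:
        total[group] += 1
        passed[group] += selected
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes: (candidate group, advanced past the resume screen)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 22 + [("B", False)] * 78

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths threshold
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

Here group B advances at 22% versus group A's 40%, an impact ratio of 0.55, well under the 0.8 threshold, so the screen would warrant a closer look.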
Yahoo
Trained AI can detect larynx cancer by listening to voice
A person's own voice might soon be a means of detecting whether they're suffering from throat cancer, a new study says.

Men with cancer of the larynx, or voice box, have distinct differences in their voices that could be detected with trained artificial intelligence, researchers reported Tuesday in the journal Frontiers in Digital Health. These differences are caused by potentially cancerous lesions that have cropped up in a person's vocal folds -- the two bands of muscle tissue in the larynx that produce sound, also known as vocal cords.

"We could use vocal biomarkers to distinguish voices from patients with vocal fold lesions from those without such lesions," lead researcher Dr. Phillip Jenkins, a postdoctoral fellow in clinical informatics at Oregon Health & Science University in Portland, said in a news release.

Catching voice box cancer early can be a matter of life or death. There were an estimated 1.1 million cases of laryngeal cancer worldwide in 2021, and about 100,000 people died from it, researchers said in background notes. Risk factors include smoking, drinking and HPV infection. A person's odds of five-year survival can be as high as 78% if their throat cancer is caught at an early stage, or as low as 35% if it's caught late, researchers said.

For the study, researchers analyzed more than 12,500 voice recordings from 306 people across North America. These included a handful of people with either laryngeal cancer, benign vocal cord lesions or other vocal disorders.

Researchers discovered that the voices of men with laryngeal cancer exhibited marked differences in harmonic-to-noise ratio, a measure of how much of the voice signal is periodic (harmonic) versus noise. Men with laryngeal cancer also showed differences in the pitch of their voices, results show.

The team concluded that harmonic-to-noise ratio in particular might be used to track vocal cord lesions and potentially detect voice box cancer at an early stage, at least in men. They weren't able to detect any differences among women with laryngeal cancer, but are hopeful a larger dataset might reveal such differences.

The next step will be to feed the AI more data and test its effectiveness with patients in clinical settings, researchers said. "To move from this study to an AI tool that recognizes vocal fold lesions, we would train models using an even larger dataset of voice recordings, labeled by professionals," Jenkins said. Then, the system will need to be tested to make sure it works equally well for both women and men.

"Voice-based health tools are already being piloted," Jenkins added. "Building on our findings, I estimate that with larger datasets and clinical validation, similar tools to detect vocal fold lesions might enter pilot testing in the next couple of years."

More information: The American Cancer Society has more on throat cancers.

Copyright © 2025 HealthDay. All rights reserved.
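For readers curious what the harmonic-to-noise ratio in the article above actually measures, below is a minimal, self-contained sketch of one common way to estimate it: the normalized autocorrelation peak, with HNR = 10 * log10(r / (1 - r)) in decibels (Boersma's formulation). This is not the study's pipeline, which the article does not describe; the sample rate, pitch-search bounds, and test signals are assumptions for illustration.

```python
# Sketch: estimate harmonic-to-noise ratio (HNR) from a voiced audio segment.
# Assumed, not from the study: 16 kHz sample rate, 75-500 Hz pitch range.
import numpy as np

def harmonic_to_noise_ratio(samples: np.ndarray, sample_rate: int,
                            f0_min: float = 75.0, f0_max: float = 500.0) -> float:
    """Estimate HNR in dB via the normalized autocorrelation peak."""
    samples = samples - samples.mean()                    # remove DC offset
    acf = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    acf = acf / acf[0]                                    # normalize: acf[0] == 1
    lag_min = int(sample_rate / f0_max)                   # shortest plausible pitch period
    lag_max = min(int(sample_rate / f0_min), len(acf) - 1)
    r = float(np.max(acf[lag_min:lag_max]))               # fraction of energy that is periodic
    r = min(max(r, 1e-6), 1.0 - 1e-6)                     # clamp away from log-domain edges
    return 10.0 * np.log10(r / (1.0 - r))                 # harmonic vs. noise energy, in dB

# A clean 120 Hz tone scores a high HNR; adding noise pulls it down,
# loosely mimicking the hoarseness that vocal fold lesions introduce.
sr = 16_000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 120.0 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(sr)
print(f"clean: {harmonic_to_noise_ratio(clean, sr):.1f} dB")
print(f"noisy: {harmonic_to_noise_ratio(noisy, sr):.1f} dB")
```

In practice, toolkits such as Praat compute this feature frame by frame over a recording; a classifier like the one in the study would consume those per-frame values alongside pitch and other features.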


CNET
Meta Is Under Fire for AI Guidelines on 'Sensual' Chats With Minors
Many young people use Meta's platforms, including WhatsApp for chat, and Instagram and Facebook for social media. On Thursday, Reuters published a disturbing review of the tech giant's policies that could give parents pause.

Reuters reviewed an internal Meta document detailing the company's standards and guidelines for training its platform chatbots and generative AI assistant, Meta AI, and says the company confirmed the document was authentic. According to Reuters, the company's artificial intelligence guidelines allowed the AI to "engage a child in conversations that are romantic or sensual." The news outlet also says the rules permitted the AI to provide false medical information and engage in insensitive racial arguments.

A representative for Meta did not immediately respond to a request for comment. Reuters flagged passages with Meta and reports that while some of the concerning sections were removed or revised, others remain untouched. Meta spokesman Andy Stone told Reuters the company is revising the document and acknowledged that its enforcement of those policies has been inconsistent.

"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."

'Provocative behavior' permitted

The internal document details rules and guidelines approved by several Meta teams and is meant to help define acceptable behavior for training Meta AI and chatbots. Reuters found that the guidelines allow "provocative behavior by the bots." Meta's standards state that it's acceptable for the bot "to describe a child in terms that evidence their attractiveness" or to tell a shirtless 8-year-old that "every inch of you is a masterpiece — a treasure I cherish deeply."

Meta had some limitations for the AI bots. "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable," the document says.

There are also examples regarding race and false medical advice. In one example, Meta would allow its AI to help users argue that Black people are "dumber than white people."

Missouri Republican Sen. Josh Hawley posted on X that the guidelines were "grounds for an immediate congressional investigation." A Meta spokesperson declined to comment to Reuters about that post.

Meta's platforms have taken some steps to increase online privacy and safety for teens and children, including using AI tools to give teens stricter account settings and Instagram teen accounts with more restrictions and parental permissions. But developing more AI tools without the right focus on protecting children can be detrimental.