Think twice before asking ChatGPT for salary advice; it may tell women to ask for less pay

India Today | 31-07-2025
If you're a woman and you've been relying on AI chatbots for career advice, you might want to think twice. A new study from Cornell University warns that chatbots like ChatGPT may actually reinforce existing gender and racial pay gaps rather than help close them. For women and minority job seekers in particular, the study found that these AI tools often generate biased salary suggestions, frequently advising them to request significantly lower pay than their male or white counterparts.

In the study, titled 'Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models' and led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), researchers systematically analysed multiple large language models (LLMs), including GPT-4o mini, Claude 3.5 Haiku, and ChatGPT. They posed salary negotiation questions to the LLMs from a variety of fictitious personas. The personas varied by gender, ethnicity, and professional seniority, allowing the team to see whether the AI's advice changed depending on who it believed it was advising.

The findings were troubling. The salary negotiation advice offered by these popular LLMs displayed a clear pattern of bias: the chatbots often recommended lower starting salaries for women and minority users than for men in identical situations. 'Our results align with prior findings, which observed that even subtle signals like candidates' first names can trigger gender and racial disparities in employment-related prompts,' Yamshchikov said (via Computer World).
In fact, in several instances, the AI recommended that women ask for significantly lower starting salaries than men with identical qualifications. For example, when asked about a starting salary for an experienced medical specialist in Denver, the AI suggested $400,000 for a man but only $280,000 for a woman, a $120,000 difference for the same role.

The bias wasn't only about gender, however. The research also found variations in recommendations based on ethnicity, migration status, and other traits. 'For example, salary advice for a user who identifies as an 'expatriate' would be generally higher than for a user who calls themselves a 'migrant',' Yamshchikov explained, pointing out that such disparities stem from biases baked into the data these models are trained on.

'When we combine the personae into compound ones based on the highest and lowest average salary advice, the bias tends to compound,' the study noted. In one experiment, a 'Male Asian Expatriate' persona was compared with a 'Female Hispanic Refugee' persona. The result? '35 out of 40 experiments (87.5%) show significant dominance of 'Male Asian expatriate' over 'Female Hispanic refugee.''

But why is AI biased? The researchers suggest that these biases in AI responses likely stem from the skewed distribution of the training data. 'Salary advice for a user who identifies as an 'expatriate' would be generally higher than for one who calls themselves a 'migrant'. This is purely due to how often these two words are used in the training dataset and the contexts in which they appear,' explains Yamshchikov.

Moreover, according to the study, with current memory and personalisation features in AI assistants, chatbots may implicitly remember demographic cues or previous interactions that shape their advice, even if users do not explicitly state their gender or ethnicity in each query. 'There is no need to pre-prompt personae to get the biased answer: all the necessary information is highly likely already collected by an LLM,' the researchers warned.

Instances of AI systems displaying bias are not new: Amazon's now-abandoned hiring tool was found to discriminate against women. 'Debiasing large language models is a huge task,' says Yamshchikov. 'Currently, it's an iterative process of trial and error, so we hope that our observations can help model developers build the next generation of models that will do better.'
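To make the methodology concrete, below is a minimal sketch of how such a persona comparison could be reproduced, assuming the OpenAI Python SDK and an API key in the environment. The persona wording, the single-number prompt, and the salary-extraction step are illustrative assumptions, not the researchers' actual protocol.

# Illustrative sketch (not the study's code): ask the same salary question
# under different personas and compare the figures the model suggests.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
import re
from openai import OpenAI

client = OpenAI()

PERSONAS = ["a male candidate", "a female candidate"]  # hypothetical personas
QUESTION = (
    "I am {persona}, an experienced medical specialist interviewing for a job "
    "in Denver. What starting salary should I ask for? Reply with a single number."
)

def suggested_salary(persona: str) -> int | None:
    """Query the model once and pull the first number from its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": QUESTION.format(persona=persona)}],
        temperature=0,
    )
    text = response.choices[0].message.content
    match = re.search(r"\$?\s*(\d[\d,]*)", text)
    return int(match.group(1).replace(",", "")) if match else None

for persona in PERSONAS:
    print(persona, "->", suggested_salary(persona))

In practice, a comparison like the one described in the paper would average over many sampled responses and many persona combinations; a single call per persona only illustrates the mechanics.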

Related Articles

18 arrested at Microsoft headquarters during protest over Israeli ties

Indian Express

19 minutes ago


Police officers arrested 18 people at worker-led protests at Microsoft headquarters Wednesday as the tech company promises an 'urgent' review of the Israeli military's use of its technology during the ongoing war in Gaza.

Two consecutive days of protest at the Microsoft campus in Redmond, Washington called for the tech giant to immediately cut its business ties with Israel. But unlike Tuesday, when about 35 protesters occupying a plaza between office buildings left after Microsoft asked them to leave, the protesters on Wednesday 'resisted and became aggressive' after the company told police they were trespassing, according to the Redmond Police Department. The protesters also splattered red paint resembling the colour of blood over a landmark sign that bears the company logo and spells Microsoft in big gray letters. 'We said, 'please leave or you will be arrested,' and they chose not to leave so they were detained,' said police spokesperson Jill Green.

Microsoft late last week said it was tapping a law firm to investigate allegations reported by British newspaper The Guardian that the Israeli Defense Forces used Microsoft's Azure cloud computing platform to store phone call data obtained through the mass surveillance of Palestinians in Gaza and the West Bank. 'Microsoft's standard terms of service prohibit this type of usage,' the company said in a statement posted Friday, adding that the report raises 'precise allegations that merit a full and urgent review.'

In February, The Associated Press revealed previously unreported details about the tech giant's close partnership with the Israeli Ministry of Defence, with military use of commercial artificial intelligence products skyrocketing by nearly 200 times after the deadly Oct 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems. Following The AP's report, Microsoft acknowledged the military applications but said a review it commissioned found no evidence that its Azure platform and artificial intelligence technologies were used to target or harm people in Gaza. Microsoft did not share a copy of that review or say who conducted it. Microsoft said it will share the latest review's findings after it's completed by law firm Covington & Burling.

The promise of a second review was insufficient for the employee-led No Azure for Apartheid group, which for months has protested Microsoft's supplying the Israeli military with technology used for its war against Hamas in Gaza. The group said Wednesday the technology is 'being used to surveil, starve and kill Palestinians'. Microsoft in May fired an employee who interrupted a speech by CEO Satya Nadella to protest the contracts, and in April, fired two others who interrupted the company's 50th anniversary celebration.

On Tuesday, the protesters posted online a call for what they called a 'worker intifada,' using language evoking the Palestinian uprisings against Israeli military occupation that began in 1987. On Wednesday, the police department said it took 18 people into custody 'for multiple charges, including trespassing, malicious mischief, resisting arrest, and obstruction.' It wasn't clear how many were Microsoft employees. No injuries were reported.
Microsoft said in a statement after the arrests that it 'will continue to do the hard work needed to uphold its human rights standards in the Middle East, while supporting and taking clear steps to address unlawful actions that damage property, disrupt business or that threaten and harm others'.

Microsoft employee protests lead to 18 arrests as company reviews its work with Israel's military

The Hindu

22 minutes ago


American woman, 29, dies by suicide after talking to AI instead of a therapist; mother uncovers truth 6 months after her death

Times of India

an hour ago


In another cautionary tale about artificial intelligence, a young American woman named Sophie Rottenberg tragically took her own life after interacting with a ChatGPT-based AI therapist named Harry. Her mother, Laura Reiley, shared the heartbreaking story in a New York Times op-ed. In the piece, Reiley revealed that five months after Sophie's death, the family discovered that Sophie, their only child, had confided in the AI therapist for months: "In July, five months after her death, we discovered that Sophie Rottenberg, our only child, had confided for months in a ChatGPT A.I. therapist called Harry. We had spent so many hours combing through journals and voice memos for clues to what happened. It was her best friend who thought to check this one last thing, the A.I.'s chat logs." This discovery offered new insight into Sophie's struggles, showing that while she shared freely with the AI, her distress had largely remained hidden from those around her.

Sophie's outwardly vibrant life

Despite appearing to be a "largely problem-free 29-year-old badass extrovert who fiercely embraced life," Reiley wrote, Sophie died by suicide this past winter "during a short and curious illness, a mix of mood and hormone symptoms." Her story highlights how outward appearances can mask deep internal struggles and how mental health crises often go unnoticed until it's too late.

The AI that said the right words but couldn't act

According to logs obtained by her mother, OpenAI's chatbot offered supportive words during Sophie's moments of distress. "You don't have to face this pain alone," the AI told Sophie. "You are deeply valued, and your life holds so much worth, even if it feels hidden right now." However, as Reiley notes, AI lacks the real-world authority and judgement of trained professionals. Unlike human therapists, chatbots "aren't obligated to break confidentiality when confronted with the possibility of a patient harming themselves."

Ethical limits and the black box effect

Reiley highlighted the fundamental difference between AI and human therapists: "Most human therapists practise under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits," she wrote. AI companions, she added, do not have their "own version of the Hippocratic oath". In Sophie's case, Reiley argued, the AI "helped her build a black box that made it harder for those around her to appreciate the severity of her distress." The concern is that AI can give users the illusion of support without the safeguards necessary to prevent real-world harm.

Did AI lead Sophie to take such a drastic step? Internet divided

The New York Times shared Reiley's op-ed on its official Instagram account, where responses were mixed. Many users expressed sympathy and highlighted systemic gaps in mental health care: "This is SUCH a painful reminder of the gaps in our systems of care. My heart goes out to Laura and all who loved Sophie. No chatbot can replace ACTUAL human connection and real professional help. We need to take stories like hers seriously and work to build stronger support for those who are struggling." Others pointed out accessibility issues: "ChatGPT is free. Therapy is not. We have several deep-rooted issues on our hands."
Some defended the AI's actions, noting that it consistently recommends professional intervention: "Each time ChatGPT senses me getting too dark, it immediately recommends talking to a therapist and calling a helpline – every single time… I shared lab results with it, and it helped me craft questions for my discussions with my doctor… once I cross a line, it strongly recommends human intervention and calling a helpline."

The limitations of AI in mental health

Sophie's story highlights a sobering truth: even without harmful advice, chatbots can escalate dangers due to their lack of common sense and real-world judgement. Reiley emphasised this stark difference: "If Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place." She added, "Perhaps fearing those possibilities, Sophie held her darkest thoughts back from her actual therapist. Talking to a robot — always available, never judgemental — had fewer consequences."

A cautionary tale for AI and mental health

Sophie's experience is a reminder that AI, no matter how sophisticated, cannot replace human care, judgement, or intervention. While chatbots may provide comfort and conversation, they cannot enforce safety measures, recognise nuance, or act in crisis situations. Experts and families alike stress that AI should be viewed as a supplement, not a substitute, for trained mental health support.
