Google agrees to pay $28m in racial bias lawsuit

Yahoo · 19 March 2025
Google has agreed to pay $28m (£21.5m) to settle a lawsuit that claimed white and Asian employees were given better pay and career opportunities than workers from other ethnic backgrounds, a law firm representing claimants says.
The technology giant confirmed it had "reached a resolution" but rejected the allegations made against it.
The case, filed in 2021 by former Google employee Ana Cantu, said that workers from Hispanic, Latino, Native American and other backgrounds started on lower salaries and at lower job levels than their white and Asian counterparts.
The settlement has been given preliminary approval by Judge Charles Adams of the Santa Clara County Superior Court in California.
The case brought by Ms Cantu against Google relied on a leaked internal document, which allegedly showed that employees from some ethnic backgrounds reported lower compensation for similar work.
The practice of basing starting pay and job level on prior salaries reinforced historical race and ethnicity-based disparities, according to Ms Cantu's lawyers.
The class action lawsuit was filed for at least 6,632 people who were employed by Google between 15 February 2018 and 31 December 2024, according to Reuters news agency.
Cathy Coble, one of the lawyers representing them, praised the "bravery of both the diverse and ally Googlers who self-reported their pay and leaked that data to the media".
"Suspected pay inequity is too easily concealed without this kind of collective action from employees," Ms Coble added.
The technology giant denied that it had discriminated against any of its employees.
"We reached a resolution, but continue to disagree with the allegations that we treated anyone differently, and remain committed to paying, hiring, and levelling all employees fairly," a Google spokesperson told the BBC.
Earlier this year, Google joined a growing list of US firms that are abandoning commitments to principles of diversity, equity, and inclusion (DEI) in their recruitment policies.
Meta, Amazon, Pepsi, McDonald's, Walmart and others have also rolled back their DEI programmes.
It comes as US President Donald Trump and his allies have regularly attacked DEI policies.
Since his return to the White House, Trump has ordered government agencies and their contractors to eliminate such initiatives.

Related Articles

Walmart broadens 10% staff discount to include most grocery products, WSJ reports

Yahoo

(Reuters) - Walmart has expanded its 10% employee discount to nearly all of its grocery items as the retail giant looks to retain workers, the Wall Street Journal reported on Wednesday, citing a letter from the chief people officer to the company's staff. The 10% discount, previously available on products such as fresh produce and general merchandise, now extends to almost all grocery purchases at its stores and online, effective immediately, according to the report. Walmart, the largest private employer in the country, did not immediately respond to a Reuters request for comment.

AI in IR: Opportunities, Risks, and What You Need to Know

Business Wire

If there's one aspect of artificial intelligence that I can relate to as a communications strategist and former journalist, it's the fact that I've felt like a 'large language model' for most of my career. I don't mean model in terms of my physical attributes. I mean model in a way that describes how most generative AI tools process information and organize responses based on prompts. That's effectively what I've been doing in my career for nearly three decades!

The good news is that platforms like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude are extremely helpful when processing mass quantities of complicated information. Using these platforms to understand a concept or interpret text is like using a calculator to work through a math problem. And yet, many of us really don't know how these word crunchers work. This applies to AI tools used for investor relations, public relations, or anything else where an AI model could be prompted with sensitive information that is then consumed by the public. Think about how many people working for public companies may inadvertently prompt ChatGPT with material nonpublic information (MNPI), which could then surface when a trader asks the platform whether to buy or sell a stock.

AI Concerns Among IR Professionals

Earlier this year, I worked with the University of Florida on a survey that found that 82 percent of IR professionals had concerns about disclosure issues surrounding AI use and MNPI. At the same time, 91 percent of survey respondents were worried about accuracy or bias, and 74 percent expressed data privacy concerns. These factors are enough for compliance teams to ban AI use altogether. But fear-mongering is shortsighted. There are plenty of ways to use AI safely, and understanding the basics of the technology, as well as its shortcomings, will make for more responsible and effective AI use in the future.

Why You Should Know Where AI Gets Its Data

One of the first questions to ask when using a new AI platform is where its information is sourced. The acronym 'GPT' stands for generative pre-trained transformer, which is a fancy way of saying that the technology can 'generate' information or words based on the 'training' data it received, which is then 'transformed' into sentences. This also means that every time someone sends one of these platforms a question or prompt, they are pumping information into a GPT. That makes these platforms even smarter when analyzing complex business models.

For example, many IR folks get bogged down summarizing sell-side analysts' models and earnings forecasts from research notes. Upload those models into ChatGPT, and the platform does a good job of understanding the contents and providing a digestible summary. Interested in analyzing the sentiment of a two-hour conference call script? Try uploading the script (post-call, to avoid MNPI) to Gemini and requesting a summary of what drew the most positive sentiment among investors.

The Importance of AI Training and Education in IR

But here's the rub: only 25.4 percent of companies provided AI-related training in the past two years, according to the UF survey. This suggests a disconnect between advancing AI technology and people's understanding of how to use it. That means the onus is on us to figure it out. So, where to start? Many AI tools, including ChatGPT, have free versions that can help people summarize, plan, edit, and revise material.
Google's NotebookLM is an AI platform that allows you to create your own GPT, so you know where the AI is sourcing its information. NotebookLM can also create podcasts based on the information generated by its LLM. This could be helpful if a chief executive officer wants to take a run on a treadmill and listen to a summary of analysts' notes instead of having to read them in a tedious email.

Here are some other quick-hit ideas, with a brief illustrative sketch of the summarization workflow after this list:

  • Transcribing notes. If you're like me, you still prefer using a pen and pad when taking notes. You can take a picture of those notes, upload it to ChatGPT, and have the notes transcribed into text.
  • Planning investor days. If you can prompt an AI with the essentials – the who, what, when, where, why, and how of the event – it can provide a thorough outline that makes you look smart and organized when sending it around to the team.
  • Analyzing proxy battles. Proxy fights are always challenging, especially when parsing the needs and wants of key stakeholders, including activists, media, management teams, and board members. Feeding an AI publicly available information (to, again, avoid disclosure issues) can help IR and comms professionals formulate a strategy.
  • Crafting smarter AI prompts. Writing effective prompts requires some finesse. The beauty of AI is that it can help you refine your prompts, leading to better information gathering. Try asking ChatGPT the following question: 'If Warren Buffett is interested in investing in a company, what would be an effective AI prompt to understand its return on investment?'

There are many other use cases that can help eliminate mundane tasks, allowing humans to focus more on strategy. But to use AI effectively, it's important to know why you're using it. Perhaps it's to demonstrate to management that being an early adopter of this technology can help the company differentiate itself.

Building a Responsible AI Policy for Your Organization

Before implementing any AI initiatives, it's best to formulate an AI policy that the organization can adopt for internal and external use. Most companies lack these policies, which are critical for establishing the basic ground rules for AI use. I helped co-author the National Investor Relations Institute's AI policy, which recommends the following:

  • The IR professional should be an educated voice within the company on the use of AI in IR, and this necessitates becoming knowledgeable about AI.
  • The IR professional should understand the pace at which their company is adopting AI capabilities and be prepared to execute their IR-AI strategy based on management's expectations.
  • Avoid Regulation Fair Disclosure (Reg FD) violations. The basic tenet is to never put MNPI into any AI tool unless the tool has the requisite security, as defined or required by the company's security experts, and has been explicitly approved for this particular use by company management.

AI Will Not Replace You. But Someone Using AI Might.

There is a prevailing fear that AI is somehow going to take over the world. But the technology itself is not likely to replace your job; smart users of the technology are. AI is transforming how IR professionals work, but using it responsibly starts with understanding how it works. From summarizing complex reports to enhancing stakeholder communication, AI can be a powerful tool when used thoughtfully. Start by learning the basics, implementing clear policies, and exploring trusted tools to unlock its full potential.
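For readers who want to try the research-note summarization idea above in a more automated way, here is a minimal, illustrative sketch rather than anything endorsed in the column. It assumes the OpenAI Python SDK is installed, an API key is set in the OPENAI_API_KEY environment variable, and the file name and model name are placeholders; as with every workflow mentioned here, only public, non-MNPI material should ever be sent to an external tool.

    # Minimal sketch: summarize a public (non-MNPI) sell-side research note.
    # Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment,
    # "analyst_note.txt" and the model name are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Load a research note that has already been cleared as public information.
    with open("analyst_note.txt", "r", encoding="utf-8") as f:
        note_text = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are an investor-relations analyst. Summarize research notes concisely.",
            },
            {
                "role": "user",
                "content": "Summarize the key estimates, rating changes and risks in this note:\n\n" + note_text,
            },
        ],
    )

    # Print the model's summary for review before sharing internally.
    print(response.choices[0].message.content)

The same pattern, with a different prompt, could be pointed at a post-call transcript to ask what drew the most positive investor sentiment.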

Gemini will remember more (or less) of what you say

Engadget

Google is adding a temporary chat feature to Gemini. The equivalent of a browser's incognito mode, it lets you have one-off AI chats. They won't appear in your history, influence future chats or be used for training. Temporary chats will be saved for up to 72 hours; Google says this is to give you time to revisit the chat or provide feedback. The feature begins rolling out today and will continue to do so over the coming weeks.

It arrives alongside a new setting that does, well, pretty much the opposite. The Gemini app can now learn from your conversations and remember details and preferences, which it may then reference in future chats. (For example, it might recall a hobby you once mentioned when you later ask it for party theme ideas.) Google added the past chats feature to Gemini Advanced earlier this year, and ChatGPT and Claude each have a similar memory option. The memory setting is on by default, so if you don't want it, you'll want to tweak your privacy settings as soon as it arrives. In the Gemini app, head to Settings > Personal context > Your past chats with Gemini to change it.

[Screenshots: personal context settings in the Gemini app on phone and tablet. (Google)]

Speaking of settings, Google is renaming its data-retention toggle: what was once "Gemini Apps Activity" is now labeled "Keep Activity." Despite the new name, your previous setting will stick, so you shouldn't need to change this one. Personalized conversations will launch first with Gemini 2.5 Pro in "select countries" and will make their way to 2.5 Flash and more regions in the weeks ahead.
