Trump wants to end tech industry's 'woke AI' efforts to address bias
After retreating from their workplace diversity, equity and inclusion programs, tech companies could now face a second reckoning over their DEI work in AI products.
In the White House and the Republican-led Congress, 'woke AI' has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to 'advance equity' in AI development and curb the production of 'harmful and biased outputs' are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.
And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and 'responsible AI' in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on 'reducing ideological bias' in a way that will 'enable human flourishing and economic competitiveness,' according to a copy of the document obtained by The Associated Press.
In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.
But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive.
Back then, the tech industry already knew it had a problem with the branch of AI that trains machines to 'see' and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies that portrayed Black and brown people in an unflattering light.
'Black people or darker skinned people would come in the picture and we'd look ridiculous sometimes,' said Monk, a scholar of colorism, a form of discrimination based on people's skin tones and other features.
Google adopted a color scale invented by Monk that improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.
'Consumers definitely had a huge positive response to the changes,' he said.
Now Monk wonders whether such efforts will continue in the future. While he doesn't believe that his Monk Skin Tone Scale is threatened because it's already baked into dozens of products at Google and elsewhere — including camera phones, video games and AI image generators — he and other researchers worry that the new mood is chilling future initiatives and funding to make technology work better for everyone.
'Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune,' Monk said. 'But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there's a lot of pressure to get to market very quickly.'
Trump has cut hundreds of science, technology and health funding grants touching on DEI themes, but the administration's influence on commercial development of chatbots and other AI products is more indirect. In investigating AI companies, Republican Rep. Jim Jordan, chair of the House Judiciary Committee, said he wants to find out whether former President Joe Biden's administration 'coerced or colluded with' them to censor lawful speech.
Michael Kratsios, director of the White House's Office of Science and Technology Policy, said at a Texas event this month that Biden's AI policies were 'promoting social divisions and redistribution in the name of equity.'
The Trump administration declined to make Kratsios available for an interview but cited several examples of what he meant. One was a line from a Biden-era AI research strategy that said: 'Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.'
Even before Biden took office, a growing body of research and personal anecdotes was attracting attention to the harms of AI bias.
One study showed self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them in greater danger of getting run over. Another study asking popular AI text-to-image generators to make a picture of a surgeon found they produced a white man about 98% of the time, far higher than the real proportions even in a heavily male-dominated field.
Face-matching software for unlocking phones misidentified Asian faces. Police in U.S. cities wrongfully arrested Black men based on false face recognition matches. And a decade ago, Google's own photos app sorted a picture of two Black people into a category labeled as 'gorillas.'
Even government scientists in the first Trump administration concluded in 2019 that facial recognition technology was performing unevenly based on race, gender or age.
Biden's election propelled some tech companies to accelerate their focus on AI fairness. The 2022 arrival of OpenAI's ChatGPT added new priorities, sparking a commercial boom in new AI applications for composing documents and generating images, and pressuring companies like Google to ease their caution and catch up.
Then came Google's Gemini AI chatbot — and a flawed product rollout last year that would make it the symbol of 'woke AI' that conservatives hoped to unravel. Left to their own devices, AI tools that generate images from a written prompt are prone to perpetuating the stereotypes accumulated from all the visual data they were trained on.
Google's was no different, and when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men, and, when women were chosen, younger women, according to the company's own public research.
Google tried to place technical guardrails to reduce those disparities before rolling out Gemini's AI image generator just over a year ago. It ended up overcompensating for the bias, placing people of color and women in inaccurate historical settings, such as answering a request for American founding fathers with images of men in 18th century attire who appeared to be Black, Asian and Native American. Google quickly apologized and temporarily pulled the plug on the feature, but the outrage became a rallying cry taken up by the political right.
With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of 'downright ahistorical social agendas through AI,' naming the moment when Google's AI image generator was 'trying to tell us that George Washington was Black, or that America's doughboys in World War I were, in fact, women.'
'We have to remember the lessons from that ridiculous moment,' Vance declared at the gathering. 'And what we take from it is that the Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens' right to free speech.'
A former Biden science adviser who attended that speech, Alondra Nelson, said the Trump administration's new focus on AI's 'ideological bias' is in some ways a recognition of years of work to address algorithmic bias that can affect housing, mortgages, health care and other aspects of people's lives.
'Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognize and are concerned about the problem of algorithmic bias, which is the problem that many of us have been worried about for a long time,' said Nelson, the former acting director of the White House's Office of Science and Technology Policy who co-authored a set of principles to protect civil rights and civil liberties in AI applications.
But Nelson doesn't see much room for collaboration amid the denigration of equitable AI initiatives.
'I think in this political space, unfortunately, that is quite unlikely,' she said. 'Problems that have been differently named — algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other — will regrettably be seen as two different problems.'
Lucknow: The proposed caste enumeration exercise scheduled alongside the national Census in 2027 is set to become the centrepiece of an elaborate ground-level campaign which the BJP plans to undertake in politically crucial Uttar Pradesh. The initiative, sources said, will aim to bolster the party's support among the Other Backward Classes (OBC) ahead of the 2027 state assembly elections. Party leaders were directed to initiate dialogues with various OBC communities to inform them about the anticipated benefits of caste-based enumeration. These include the collection of data to enable targeted welfare programmes, identification of disparities in resource distribution and improved policymaking on affirmative action. Experts say the data will also provide deeper insight into the state's social structure and help promote inclusivity and equality. UP backward class welfare minister and state party chief of OBC Morcha, Narendra Kashyap, told TOI that the campaign, which will be carried out for at least a year, would reach out to all OBC sub-castes which have otherwise remained deprived of privileges, including reservation benefits. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Giao dịch CFD với công nghệ và tốc độ tốt hơn IC Markets Đăng ký Undo "The party will hold chaupals and workshops to reach out to the OBC communities," Kashyap said. The use of chaupals and workshops, analysts said, potentially suggests a bottom-up approach, with the BJP's bid to build narratives not just from above but by directly engaging with marginalised – but electorally significant – OBC communities at the grassroots level. The caste census has been a major demand of several opposition parties, essentially the Samajwadi Party and Congress. In fact, the SP's Pichhda, Dalit, Alpsankhyak (PDA) campaign — centred on caste census and social justice — is widely said to have dented the BJP while reducing its tally in the 2024 Lok Sabha elections. Experts said that by initiating its own campaign, the BJP aims to reposition itself on the issue to avoid ceding any political ground in the run-up to the panchayat elections due next year and subsequently the 2027 UP polls. The party also aims to reconnect with OBC communities, many of which have traditionally aligned with regional players like the SP and the Mayawati-led BSP. Experts suggest that the BJP will seek to steer the caste census narrative toward development and national integration. This includes showcasing its existing welfare schemes for backward castes and linking the census with its broader governance agenda. BJP, as a matter of fact, has also been wheeling out a sharp OBC narrative by invoking historical figures from the backward class communities – most recently, Maharaja Suhaildev and Ahilyabai Holkar. Meanwhile, the opposition is also pulling up socks to make deeper inroads among the OBC communities. After making an effective presence as an ally of the Samajwadi Party in the 2024 Lok Sabha elections, the Congress party plans to go the extra mile in wooing the OBC. Congress plans to hold 'Bhagidari Nyay Sammelan' in every district from June 14 to July 15.