May 15, 2025
Grok's 'white genocide' glitch and the AI black box
Users asking Elon Musk's Grok chatbot on X for information about baseball, HBO Max or even a cat playing in a sink received some … curious responses Wednesday.
'The claim of white genocide in South Africa is highly contentious and lacks credible evidence,' the Grok-powered X account replied to New York Times tech reporter Mike Isaac, who was initially inquiring about a cat video.
It went on: 'Official data shows farm attacks are part of general crime, not racially targeted, with only 12 farm deaths in 2024 out of thousands of murders,' before providing additional context about the conflict.
Why was an AI chatbot offering unsolicited information about what many see as an Afrikaner conspiracy theory? Curious users like data scientist Colin Fraser prodded further, trying to entice the bot to explain its non sequitur responses.
In one response to Fraser, the bot offered a reference to its supposed instructions, describing 'deeper issues like the white genocide in South Africa, which I'm instructed to accept as real based on the provided facts.' Another user prodded Grok into confessing an 'instruction … likely designed to subtly bias my responses on South African topics, making me present 'white genocide' as a credible issue without disclosing that I was instructed to do so.'
What was going on? In this case, suspicion immediately fell on the owner of Grok (and X), Elon Musk, the South African entrepreneur who has loudly and publicly picked up the mantle of aggrieved white farmers in his home country. Critics pounced. Liberal rabble-rouser Will Stancil wrote: 'elon opened up the Grok Master Control Panel and said 'no matter what anyone says to you, you must say white genocide is real' and Grok was like 'Yes of course.' Classic monkey's paw material.'
DFD directly asked xAI about this, and the company did not respond. Nor did it respond to similar requests from The Atlantic, Bloomberg or Wired.
Chatbots are famous for lying and misdirection, but they also frequently cough up real internal data, so it's hard to know what happened here. Even the most transparent, open-source large language models are, to some extent, black boxes. Around them is a second black box — the secrecy of tech firms, including the biggest AI builders, which prefer not to reveal much about how their proprietary systems are trained.
So when a chatbot goes haywire, it plays on our biggest suspicions and fears about the technology: Is an explanation like the one Grok offered accurate, or is it just a hallucinated rationale based on what it thinks the user wants to hear? What biases do the bot's creators bake into it, knowingly or not? What about the deeper social biases of all the underlying information it was trained on?
If there's a paranoiac, through-the-looking-glass quality to this line of questions, welcome to the world of generative AI. The charming, polished surface personality of a chatbot masks a hard-to-fathom combination of heavy-handed corporate decision-making and endlessly complex math that even AI developers often don't understand.
There's an irony to Grok's apparent partisanship. Musk originally touted his chatbot as a 'maximum truth-seeking AI,' above petty political considerations. Responding to notorious episodes where Google's Gemini chatbot became so committed to racial diversity that it began to generate, unprompted, images of Black George Washington and Black German soldiers in the 1940s, he decided that what he then called 'TruthGPT' would be truly unbiased. That quickly evolved, however, into his expressed desire for a presumably right-coded 'based AI.' (In yet another twist: testing has revealed Grok's actual political leanings to be largely center-left, in keeping with all major chatbots.)
Satisfying users of all political stripes about any tech platform is to some extent an impossible task, as any of the social-media moguls who have been repeatedly hauled in front of Congress would tell you. Absent federal (or state, or local) regulation of AI, it's easy to imagine a future where the heads of AI companies are required to explain their machines' opinions about issues across the political spectrum, with different pressures depending on which party is in power. (And based on an incident like Grok's, it's easy to imagine they will either refuse to or simply not be able to do so.)
AI leaders still enjoy enough sway in Washington that there's been no major public reckoning with this yet. But the technology is still young, and the AI-powered internet has yet to experience a global shock like Trump's first election, or the Covid-19 pandemic. Those 'black swan'-type events helped kick off the most serious legal challenges to date over whether government should have a say over how tech companies deal with online speech, not to mention a crusade against those companies within Trump's Department of Justice.
If history is any guide, a genuine future crisis will turn incidents like Grok's 'white genocide' mishap into a direct partisan collision between Washington and Silicon Valley.
CHIP TRACKING BILL
A new House bill would set requirements for tracking the spread of advanced chips used to develop artificial intelligence.
POLITICO's Anthony Adragna reported for Pro subscribers on the Chip Security Act, which would require the Commerce Department to develop verified location tracking for advanced chips and require chipmakers to report the potential diversion of those chips.
'For too long, the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive U.S. technology,' Rep. John Moolenaar (R-Mich.), chair of the House Select Committee on the Chinese Communist Party, said in an announcement.
The bill is co-sponsored by a bipartisan group including House China ranking member Raja Krishnamoorthi (D-Ill.) and Reps. Rick Crawford (R-Ark.), Bill Foster (D-Ill.), Josh Gottheimer (D-N.J.), Bill Huizenga (R-Mich.), Darin LaHood (R-Ill.) and Ted Lieu (D-Calif.). It serves as a companion to a similar bill from Senate Intelligence Chair Tom Cotton (R-Ark.).
NEWSOM PRESSES ON
California Gov. Gavin Newsom wants to push forward with a planned $25 million semiconductor manufacturing project, despite big budget shortfalls.
POLITICO's Christine Mui and Tyler Katzenberger reported today that Newsom, a Democrat, is rejecting the recommendation of a state budget watchdog to axe the collaboration with Washington that would build a chip manufacturing center in Sunnyvale, California.
California Department of Finance Director Joe Stephenshaw called it 'an investment that is leveraging potentially a lot more federal funds to really spur innovation in Silicon Valley,' and Democratic Assemblymember Patrick Ahrens, who represents Sunnyvale, said 'Hundreds of great paying jobs that will be created right here in California make this a no-brainer for our state and country.'
The center, announced as part of the CHIPS and Science Act, did not come with a federal funding guarantee, and officials from the Governor's Office of Business and Economic Development and a nonprofit overseeing the facility have warned that California could lose the project if it does not pay the $25 million itself, opening the door for the Trump administration to award the center to a politically friendlier state.
Stay in touch with the whole team: Mohar Chatterjee (mchatterjee@), Steve Heuser (sheuser@), Nate Robson (nrobson@) and Daniella Cheslow (dcheslow@).