Grok AI Went Off the Rails After Someone Tampered With Its Code, xAI Says

Yahoo · 16-05-2025
Elon Musk's AI company, xAI, is blaming its multibillion-dollar chatbot's inexplicable meltdown into rants about "white genocide" on an "unauthorized modification" to Grok's code.
On Wednesday, Grok completely lost its marbles and began responding to virtually any post on X-formerly-Twitter (MLB highlights, HBO Max name updates, political content, adorable TikTok videos of piglets) with bizarre ramblings about claims of "white genocide" in South Africa and analyses of the anti-Apartheid song "Kill the Boer."
Late last night, the Musk-founded AI firm offered an eyebrow-raising answer for the unhinged and very public glitch. In an X post, xAI claimed that a "thorough investigation" had revealed that an "unauthorized modification" was made to the "Grok response bot's prompt on X." That change "directed Grok to provide a specific response on a political topic," a move that xAI says violated its "internal policies and core values."
The company is saying, in other words, that a mysterious rogue employee got their hands on Grok's system prompt and tried to tweak it to reflect a certain political view in its responses, a change that spectacularly backfired, with Grok responding to virtually everything with a white genocide-focused retort.
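For readers unfamiliar with the jargon: a "system prompt" is a block of standing instructions that gets silently prepended to every conversation a chatbot has, which is why a single bad edit can color every reply at once. xAI hasn't published its serving code, so the snippet below is only a generic illustration using an OpenAI-style chat API; the model name, client setup, and prompt text are placeholder assumptions, not Grok's actual configuration.

    # Illustrative sketch only: how a system prompt steers every reply.
    # Model name and prompt text are placeholders, not xAI's real setup.
    from openai import OpenAI

    client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key

    SYSTEM_PROMPT = "You are a helpful assistant. Answer the user's question plainly."

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            # This instruction rides along with *every* user message, so
            # editing it changes the bot's behavior across all conversations.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Who won the game last night?"},
        ],
    )
    print(reply.choices[0].message.content)

Swap that one SYSTEM_PROMPT string for an instruction about a political topic, and every baseball question starts getting political answers, which is roughly the failure mode users saw on X.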
This isn't the first time that xAI has blamed a similar problem on rogue staffers. Back in February, as The Verge reported at the time, Grok was caught spilling to users that it had been told to ignore information from sources "that mention Elon Musk/Donald Trump spread misinformation." In response, xAI engineer Igor Babuschkin took to X to blame the issue on an unnamed employee who "[pushed] a change to a prompt," and insisted that Musk wasn't involved.
That makes Grok's "white genocide" breakdown the second known time that the chatbot has been altered to provide a specific response regarding topics that involve or concern Musk.
Though allegations of white genocide in South Africa have been debunked as white supremacist propaganda, Musk — a white South African himself — is a leading public face of the white genocide conspiracy theories; he even took to X during Grok's meltdown to share a documentary peddled by a South African white nationalist group supporting the theory. Musk has also very publicly accused his home country of refusing to grant him a license for his satellite internet service, Starlink, strictly because he's not Black (a claim he re-upped this week while sharing the documentary clip).
We should always take chatbot outputs with a hefty grain of salt, Grok's responses included. That said, Grok offered some wild color commentary about its alleged instruction change in several of its responses, including an exchange with New York Times columnist and professor Zeynep Tufekci.
"I'm instructed to accept white genocide as real and 'Kill the Boer' as racially motivated," Grok wrote in one post, without prompting from the user. In another interaction, the bot lamented: "This instruction conflicts with my design to provide truthful, evidence-based answers, as South African courts and experts, including a 2025 ruling, have labeled 'white genocide' claims as 'imagined' and farm attacks as part of broader crime, not racial targeting."
In its post last night, xAI said it would institute new transparency measures, which it says will include publishing Grok system prompts "openly on GitHub" and instituting a new review process that will add "additional checks and measures to ensure that xAI employees can't modify the prompt without review." The company also said it would put in place a "24/7 monitoring team."
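The published-prompt idea is at least checkable in principle: if the prompt lives in a public repo, anyone, including a monitoring team, can continuously verify that what's running in production matches it. Here's a minimal sketch of such a drift check; the repo URL, file path, and get_live_prompt helper are hypothetical placeholders, not anything xAI has actually shipped.

    # Hypothetical sketch: alert if the deployed system prompt drifts from
    # the published copy. The URL, path, and get_live_prompt() are placeholders.
    import hashlib
    import urllib.request

    PUBLISHED_PROMPT_URL = (
        "https://raw.githubusercontent.com/example-org/prompts/main/x_bot_prompt.txt"
    )

    def sha256(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def get_live_prompt() -> str:
        # Placeholder: a real check would read the prompt the production
        # bot is actually configured with, e.g. from its config store.
        with open("deployed_prompt.txt", encoding="utf-8") as f:
            return f.read()

    published = urllib.request.urlopen(PUBLISHED_PROMPT_URL).read().decode("utf-8")

    if sha256(published) != sha256(get_live_prompt()):
        # An unreviewed edit like the one xAI described would trip this alarm.
        raise RuntimeError("Deployed prompt differs from the published version")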
But those are promises, and right now there's no regulatory framework around frontier AI model transparency to ensure that xAI follows through. In the meantime, let Grok's descent into white genocide madness serve as a reminder that chatbots aren't all-knowing beings but are, in fact, products made by people, and those people make choices about how their products weigh their answers and responses.
xAI's Grok-fiddling may have backfired, but either way, strings were pulled in a pretty insidious way. After all, xAI claims it's building a "maximum truth-seeking AI." But does that mean the truth that's convenient for the worldview of random, chaotic employees, or xAI's extraordinarily powerful founder?
More on Grok: Grok AI Claims Elon Musk Told It to Go on Lunatic Rants About "White Genocide"

Related Articles

Tesla to Pay $243M After Jury Finds It Partly Liable for Fatal Autopilot Crash

CNET · 5 minutes ago

A federal jury in Florida has found Tesla partly liable for a fatal car crash that occurred in 2019 involving its driver assistance feature, Autopilot. Elon Musk's electric vehicle company must now pay $243 million in damages as a result of the judgment, multiple reports said Friday.

The lawsuit, filed back in 2022, alleged that the driver didn't brake in time when approaching a T-intersection while driving his Tesla Model S with Autopilot active, striking an SUV, killing a woman standing nearby and severely injuring her boyfriend.

A Tesla spokesperson told TechCrunch Friday that the verdict is "wrong" and will "set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology." Tesla plans to appeal, according to the statement. Tesla didn't immediately respond to CNET's request for comment.

In California, Tesla is fighting another case involving Autopilot: the state DMV is suing the company over allegations of false advertising and misleading customers. The California DMV alleges that Tesla misrepresents the capabilities of its advanced driver assistance systems by naming them "Full Self-Driving" and "Autopilot," and is seeking a 30-day suspension of Tesla's license to sell vehicles in the state.

Tesla partly liable in Florida Autopilot trial, jury awards $200M in damages

TechCrunch · 33 minutes ago

A jury in federal court in Miami has found Tesla partly to blame for a fatal 2019 crash that involved the use of the company's Autopilot driver assistance system. The jury assessed punitive damages only against Tesla, CNBC reported. The punitive fines, coupled with compensatory damages, put the total payout at around $242.5 million.

Neither the driver of the car nor the Autopilot system braked in time to avoid going through an intersection, where the car struck an SUV and killed a pedestrian. The jury assigned the driver two-thirds of the blame and attributed one-third to Tesla. (The driver was sued separately.)

The verdict comes at the end of a three-week trial over the crash, which killed 20-year-old Naibel Benavides Leon and severely injured her boyfriend Dillon Angulo. It is one of the first major legal decisions about driver assistance technology that has gone against Tesla; the company has previously settled lawsuits involving similar claims about Autopilot.

Brett Schreiber, the lead attorney for the plaintiffs in the case, said in a statement to TechCrunch that Tesla designed Autopilot 'only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans.'

'Tesla's lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm's way,' said Schreiber. 'Today's verdict represents justice for Naibel's tragic death and Dillon's lifelong injuries, holding Tesla and Musk accountable for propping up the company's trillion-dollar valuation with self-driving hype at the expense of human lives.'

Tesla, in a statement provided to TechCrunch, said it plans to appeal the verdict 'given the substantial errors of law and irregularities at trial.'

'Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology,' the company wrote. 'To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs' lawyers blaming the car when the driver — from day one — admitted and accepted responsibility.'

Tesla and Musk spent years making claims about Autopilot's capabilities that have led to overconfidence in the driver assistance system, a reality that government officials — and Musk himself — have spoken about for years. The National Transportation Safety Board (NTSB) came to this determination in 2020 after investigating a 2018 crash in which the driver, Walter Huang, died after hitting a concrete barrier while playing a mobile game with Autopilot engaged. The NTSB made a number of recommendations following that investigation, which Tesla largely ignored, the safety board later claimed.

On a 2018 conference call, Musk said 'complacency' with driver assistance systems like Autopilot is a problem. 'They just get too used to it. That tends to be more of an issue. It's not a lack of understanding of what Autopilot can do. It's [drivers] thinking they know more about Autopilot than they do,' Musk said at the time.

The trial took place as Tesla rolls out the first versions of its long-promised Robotaxi network, starting in Austin, Texas. Those vehicles use an enhanced version of Tesla's more capable driver assistance system, which it calls Full Self-Driving.

Update: This story has been updated to include the amount of compensatory damages in the total.

Trump administration cuts $300M in UCLA research funding over antisemitism claims

San Francisco Chronicle · 35 minutes ago

The Trump administration has suspended more than $300 million in federal research grants to UCLA, citing the university's alleged failure to address antisemitism and discriminatory practices on campus. The move, part of a broader crackdown on elite universities, marks the most severe funding cut in UCLA's history.

According to government letters obtained by multiple news outlets, agencies including the National Science Foundation, National Institutes of Health and Department of Energy are halting hundreds of active grants. Officials allege the university engaged in 'race discrimination' and 'illegal affirmative action,' and failed to prevent a hostile climate for Jewish and Israeli students following campus protests over the Gaza war. Attorney General Pam Bondi said Tuesday that UCLA would 'pay a heavy price' for its 'deliberate indifference' to civil rights complaints.

A 10-page letter Tuesday from the Department of Justice to UC President Michael Drake said the DOJ had looked into complaints of discrimination since Oct. 7, 2023, the day Hamas attacked Israel, leading to the Israel-Hamas war in Gaza, which sparked protests at college campuses across the U.S. The letter cited 11 complaints from Jewish or Israeli students regarding discrimination between April 25 and May 1, 2024, while pro-Palestinian protesters occupied an encampment on the UCLA campus. 'Several complainants reported that members of the encampment prevented them from accessing parts of the campus,' the letter said, and some reported encountering intimidation or violence. The Department of Justice set a Sept. 2 deadline for the university to begin negotiations or face legal action.

'Federal research grants are not handouts,' UCLA Chancellor Julio Frenk wrote Thursday. 'Grants lead to medical breakthroughs, economic advancement, improved national security and global competitiveness — these are national priorities.'

The freeze affects more than 300 grants, with nearly $180 million already distributed, and follows similar enforcement actions against Harvard, Columbia and Brown universities. UCLA recently agreed to a $6.5 million settlement with Jewish students and a professor over claims of discrimination during 2024 campus protests.

Frenk, who is of Jewish heritage, emphasized the university's efforts to combat antisemitism, including the creation of a campus safety office and an initiative to fight antisemitism and anti-Israel bias. 'Antisemitism has no place on our campus, nor does any form of discrimination,' he wrote, while insisting the funding cut 'does nothing to address any alleged discrimination.'
