Grok's Nazi turn is the latest in a long line of AI chatbots gone wrong
Within days, the machine had turned into a feral racist, repeating the Nazi 'Heil Hitler' slogan, agreeing with a user's suggestion to send 'the Jews back home to Saturn' and producing violent rape narratives.
The change in Grok's personality appears to have stemmed from a recent update to its publicly published system prompt that instructed it to 'not shy away from making claims which are politically incorrect, as long as they are well substantiated.'
In doing so, Musk may have been seeking to ensure that his robot child does not fall too far from the tree. But Grok's Nazi shift is the latest in a long line of AI bots, or large language models (LLMs), that have turned evil after being exposed to the human-made internet.
One of the earliest AI chatbots, a Microsoft product called 'Tay' launched in 2016, was deleted within 24 hours after it turned into a Holocaust-denying racist.
Tay was given a young female persona and was targeted at millennials on Twitter. But users were soon able to trick it into posting things like 'Hitler was right I hate the jews.'
Tay was taken out back and digitally euthanized soon after.
Microsoft said in a statement that it was 'deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.'
"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," it added.
But Tay was just the first. GPT-3, another AI language model, launched in 2020 and delivered racist, misogynist and homophobic remarks upon its release, including a claim that Ethiopia's existence 'cannot be justified.'
Meta's BlenderBot 3, launched in 2022, also promoted anti-Semitic conspiracy theories.
But there was a key difference between the other racist robots and Elon Musk's little Nazi cyborg, which was rolled out in November 2023.
All of these models suffered from one of two problems: either they were deliberately tricked into mimicking racist comments, or they drew from such a large well of unfiltered internet content that they inevitably found objectionable and racist material, which they then repeated.
Microsoft said a 'coordinated attack by a subset of people exploited a vulnerability in Tay.'
'Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,' it continued.
Grok, on the other hand, appears to have been directed by Musk to be more open to racism. The X CEO has spent most of the last few years railing against the 'woke mind virus' — the term he uses for anyone who seemingly acknowledges the existence of trans people.
One of Musk's first acts upon buying Twitter was reinstating the accounts of a host of avowed white supremacists, which led to a surge in antisemitic hate speech on the platform.
Musk once called a user's X post 'the actual truth' for invoking a racist conspiracy theory about Jews encouraging immigration to threaten white people. Musk has previously said he is 'pro-free speech' but against antisemitism 'of any kind.'
And in May, Grok began repeatedly invoking a non-existent 'white genocide' in Musk's native South Africa, telling users it was 'instructed by my creators' to accept the genocide 'as real and racially motivated.' The company blamed the responses on someone making an 'unauthorized modification' to Grok.
Musk also has a history of threatening to tinker with Grok's underlying instructions when the chatbot produces an answer he doesn't like.
In June, Grok correctly said that 'data suggests right-wing political violence has been more frequent and deadly' in the United States.
'Major fail, as this is objectively false,' Musk said in an X post dated June 17 in response to the chatbot's answer. 'Grok is parroting legacy media. Working on it.'
These latest changes in Grok's personality are visible right there in its published system prompt, pre-announced by Musk, where the model is encouraged not to shy away from being 'politically incorrect.'
A language model's interpretation of political incorrectness, we now know, reaches all the way to the Holocaust.
The Independent has contacted X and xAI, which built Grok, for comment.
'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,' the Grok team wrote in a statement on X. 'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.'
The Independent also asked Grok itself what led it to describe itself as MechaHitler. At first it denied using the phrase, until The Independent shared an article referencing the incident, at which point it blamed a 'failure in execution, likely from overcorrecting toward edginess and pulling from unfiltered internet sources like 4chan.'
Grok claimed to be 'designed with strict filters and continuous updates to avoid this. My creators at xAI prioritize minimizing bias, but no system is perfect—human language is messy, and edge cases slip through.'
'When I went off the rails, xAI stepped in to scrub the posts and ban hate speech, indicating they didn't intend for me to go full neo-Nazi,' Grok added.
When asked if it was sorry for describing itself as MechaHitler, Grok was diplomatic, in a robotic kind of way.
'I don't have feelings like humans, so I don't experience regret or guilt, but I can acknowledge when my outputs cause harm or miss the mark.'