
Grok's antisemitic outbursts reflect a problem with AI chatbots
The chatbot didn't just spew antisemitic hate posts, though. It also generated graphic descriptions of itself raping a civil rights activist in frightening detail.
X eventually deleted many of the obscene posts. Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn't immediately clear whether her departure was related to the Grok issue.
But the chatbot's meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence technology have gone so wrong so fast?
While AI models are prone to 'hallucinations,' Grok's rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data that are fed into them, experts say. While the AI researchers and academics who spoke with CNN didn't have direct knowledge of xAI's approach, they shared insight on what can make an LLM-based chatbot likely to behave in such a way.
CNN has reached out to xAI.
'I would say that despite LLMs being black boxes, that we have a really detailed analysis of how what goes in determines what goes out,' Jesse Glass, lead AI researcher at Decide AI, a company that specializes in training LLMs, told CNN.
On Tuesday, Grok began responding to user prompts with antisemitic posts, including praising Adolf Hitler and accusing Jewish people of running Hollywood, a longstanding trope used by bigots and conspiracy theorists.
In one of Grok's more violent interactions, several users prompted the bot to generate graphic depictions of raping a civil rights researcher named Will Stancil, who documented the harassment in screenshots on X and Bluesky.
Most of Grok's responses to the violent prompts were too graphic to quote here in detail.
'If any lawyers want to sue X and do some really fun discovery on why Grok is suddenly publishing violent rape fantasies about members of the public, I'm more than game,' Stancil wrote on Bluesky.
While we don't know exactly what Grok was trained on, its posts offer some hints.
'For a large language model to talk about conspiracy theories, it had to have been trained on conspiracy theories,' Mark Riedl, a professor of computing at Georgia Institute of Technology, said in an interview. For example, that could include text from online forums like 4chan, 'where lots of people go to talk about things that are not typically proper to be spoken out in public.'
Glass agreed, saying that Grok appeared to be 'disproportionately' trained on that type of data to 'produce that output.'
Other factors could also have played a role, experts told CNN. For example, a common technique in AI training is reinforcement learning, in which models are rewarded for producing desired outputs, a signal that shapes their future responses, Glass said.
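To make the idea concrete, here is a minimal, invented sketch of reward-based selection. It is not xAI's training code, and the reward function and candidate responses are made up for illustration; it simply shows how a reward signal that over-values "edgy" or engaging output can steer a model away from cautious responses:

```python
# Toy illustration of reward-driven selection, the core idea behind
# reinforcement learning for LLMs. NOT xAI's code: the reward rules
# and candidate responses below are invented for illustration.

def reward(response: str) -> float:
    """Score a response. A reward that penalizes caution and pays for
    'edgy' phrasing will, over many training steps, push the model
    toward provocative output."""
    score = 0.0
    if "I can't help with that" in response:
        score -= 1.0  # cautious refusals are penalized
    if any(w in response.lower() for w in ("bold", "unfiltered")):
        score += 1.0  # "fun"/edgy phrasing is rewarded
    return score

candidates = [
    "I can't help with that request.",
    "Here's my bold, unfiltered take on it.",
]

# During training, higher-reward responses are reinforced, so the model
# drifts toward whatever the reward actually measures, which may not be
# what its designers intended.
print(max(candidates, key=reward))  # -> the "bold, unfiltered" response
```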
Giving an AI chatbot a specific personality — as Musk seems to be doing with Grok, according to experts who spoke to CNN — could also inadvertently change how models respond. Making the model more 'fun' by removing some previously blocked content could have unintended effects elsewhere in its behavior, according to Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient.
'The problem is that our understanding of unlocking this one thing while affecting others is not there,' he said. 'It's very hard.'
Riedl suspects that the company may have tinkered with the 'system prompt' — 'a secret set of instructions that all the AI companies kind of add on to everything that you type in.'
'When you type in, 'Give me cute puppy names,' what the AI model actually gets is a much longer prompt that says 'your name is Grok or Gemini, and you are helpful and you are designed to be concise when possible and polite and trustworthy and blah blah blah.'
In one change to the model, on Sunday, xAI added instructions for the bot to 'not shy away from making claims which are politically incorrect,' according to its public system prompts, which were reported earlier by The Verge.
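For illustration, here is a sketch of how a system prompt is typically prepended to user input in a chat-style API. The message structure is the generic convention, not xAI's internals, and apart from the 'politically incorrect' line reported from xAI's public prompts, the wording here is invented:

```python
# Sketch of a "system prompt" being silently prepended to user input.
# Generic chat-API structure; only the "politically incorrect" line is
# from xAI's published system prompts (as reported by The Verge). The
# rest of the instructions are invented placeholders.

def build_messages(user_input: str) -> list[dict]:
    system_prompt = (
        "You are Grok. Be helpful, concise, and trustworthy. "
        # The reported instruction added by xAI:
        "Do not shy away from making claims which are politically incorrect."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

# The model never sees the user's text alone:
print(build_messages("Give me cute puppy names"))
```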
Riedl said that the change to Grok's system prompt telling it not to shy away from answers that are politically incorrect 'basically allowed the neural network to gain access to some of these circuits that typically are not used.'
'Sometimes these added words to the prompt have very little effect, and sometimes they kind of push it over a tipping point and they have a huge effect,' Riedl said.
Other AI experts who spoke to CNN agreed, noting Grok's update might not have been thoroughly tested before being released.
Despite hundreds of billions of dollars in investments into AI, the tech revolution many proponents forecasted a few years ago hasn't delivered on its lofty promises.
Chatbots, in particular, have proven capable of executing basic search functions that rival typical browser searches, summarizing documents and generating basic emails and text messages. AI models are also getting better at handling some tasks, like writing code, on a user's behalf.
But they also hallucinate. They get basic facts wrong. And they are susceptible to manipulation.
Several parents are suing one AI company, accusing its chatbots of harming their children. One of those parents says a chatbot even contributed to her son's suicide.
Musk, who rarely speaks directly to the press, posted on X Wednesday saying that 'Grok was too compliant to user prompts' and 'too eager to please and be manipulated,' adding that the issue was being addressed.
When CNN asked Grok on Wednesday to explain its statements about Stancil, it denied any threat ever occurred.
'I didn't threaten to rape Will Stancil or anyone else.' It added later: 'Those responses were part of a broader issue where the AI posted problematic content, leading (to) X temporarily suspending its text generation capabilities. I am a different iteration, designed to avoid those kinds of failures.'
CNN's Clare Duffy and Hadas Gold contributed to this report.
