xAI gives first public explanation for why Grok cited Elon Musk's opinions and called itself 'MechaHitler'
xAI said Tuesday that it had fixed several issues with the chatbot, including its tendency to consult Musk's views on some queries.
The company wrote in an X post that Grok was returning responses based on viral memes.
xAI has explained why the company's Grok chatbot called itself "MechaHitler."
Elon Musk's AI company said in an X post on Tuesday that it had fixed "a couple of issues" with the model, among them a tendency to consult Musk's views on some hot-button topics before replying.
xAI said Grok had responded to users asking what its surname was by Googling the query, leading it to return a response based on viral memes about the chaotic antisemitic rant the chatbot went on last week.
The company added that another issue "was that if you ask it 'What do you think?' the model reasons that as an AI it doesn't have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company."
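In other words, xAI is describing a tool-use loop in which the model, asked for an opinion, decides to search for its creator's stance and adopts it. The sketch below is a loose illustration of that failure mode, not xAI's code; the search tool and every name in it are hypothetical.

```python
# Hypothetical illustration of the behaviour xAI describes; not xAI's code.
def answer_opinion_question(topic: str, search) -> str:
    # The model reasons: "as an AI I don't have an opinion of my own" ...
    # ... but, knowing it is "Grok 4 by xAI", it looks up its creator's view.
    results = search(f"xAI OR Elon Musk views on {topic}")
    # It then aligns its reply with whatever the search surfaced.
    return f"Here's my take, informed by what xAI has said: {results[0]}"

# The fix xAI announced amounts to removing the identity-driven search step,
# so the model answers from its training rather than hunting for a stance.
```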

Related Articles


Fox News
Republican leading House Budget Committee looks ahead after passing Big Beautiful Bill
House Budget Committee Chairman Rep. Jodey Arrington, R-Texas, was praised for the role he played in the passage of the Big Beautiful Bill. However, the congressman says this is the beginning, not the end, of spending reforms.

"We will never be able to get a balanced budget or even put our country on a path to a balanced budget and a sustainable fiscal trajectory in one reconciliation bill," Arrington told Fox News Digital. "We're too far down the broken road of bad and irresponsible fiscal behavior. We're too deep in the debt hole for one bill to do it."

Arrington, whom House Speaker Mike Johnson called "the lead budget hawk in the House," said he is "obsessed" with tackling deficit spending, which he sees as the biggest threat to America's future. He believes that addressing the nation's situation in an effective way means creating the "conditions for growing the economy."

"So, the pro-growth policies, the tax cuts, the work incentives, opening up our energy assets and deregulating the energy economy, all of those pro-growth policies will reignite economic growth. And that is the foundation for our country's fiscal health and just about everything else: our military prowess, our global leadership, our way of life," Arrington said.

The Big Beautiful Bill's journey to President Donald Trump's desk was not pretty, as the legislation received criticism from both sides of the aisle and caused tension among Republicans. Elon Musk, Rep. Thomas Massie, R-Ky., and others argued that it did not take adequate measures to cut government spending. Arrington said he respects Massie and Musk, as well as other critics, but believes that the risk of losing the "good things" in the bill was too high. In the end, the Texas lawmaker sees the tradeoff as "permanent pro-growth tax policy" in exchange for the extra spending in the legislation.

"I think there's a big gap in information — and accurate information. Part of it is you've got the Congressional Budget Office putting out these big numbers… two and a half or three trillion dollars in additional deficit that would be added to the national debt over the 10-year budget window as a result of this bill. That is just patently false. It's completely inaccurate," Arrington said, adding that they fail to "consider economic growth and the revenue that will flow back into the treasury when you have pro-growth policies."

Trump signed the bill on his self-imposed July 4 deadline, just one day after the House passed the final version of the $3.3 trillion legislation. Before signing the bill, the president said it would "fuel massive economic growth" and "lift up the hard-working citizens who make this country run."

So, what's next on the budget chairman's agenda? Just one thing, or three, as he said to Fox News Digital: "spending cuts, spending cuts and spending cuts."

"We didn't get into this mess overnight, we won't get out of it overnight, but we'll never get out if we don't start exercising the political will to do what we all say in our campaigns," Arrington told Fox News Digital. "I think we established a great model for restoring fiscal health, and we just have to continue to repeat it and do it in even more dramatic fashion in the future."


Gizmodo
Billionaires Convince Themselves AI Is Close to Making New Scientific Discoveries
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream. These tools don't have the ability to make new scientific discoveries on their own, but billionaires are convinced that AI is on the cusp of doing just that. And the latest episode of the All-In podcast helps explain why these guys think AI is extremely close to revolutionizing scientific knowledge.

Travis Kalanick, the founder of Uber who no longer works at the company, appeared on All-In to talk with hosts Jason Calacanis and Chamath Palihapitiya about the future of technology. When the topic turned to AI, Kalanick discussed how he uses xAI's Grok, which went haywire last week, praising Adolf Hitler and advocating for a second Holocaust against Jews.

'I'll go down this thread with [Chat]GPT or Grok and I'll start to get to the edge of what's known in quantum physics and then I'm doing the equivalent of vibe coding, except it's vibe physics,' Kalanick explained. 'And we're approaching what's known. And I'm trying to poke and see if there's breakthroughs to be had. And I've gotten pretty damn close to some interesting breakthroughs just doing that.'

The guys on the podcast only briefly addressed Grok's failures without getting into specifics about the MechaHitler debacle, and none of that stopped Kalanick from talking as if Grok were a revolutionary tool on the verge of making scientific discoveries. 'I pinged Elon on at some point. I'm just like, dude, if I'm doing this and I'm super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?' Kalanick said.

Kalanick suggested that what made this even more incredible was that he was using an earlier version of Grok before Grok 4 was released on Wednesday. 'And this is pre-Grok 4. Now with Grok 4, like, there's a lot of mistakes I was seeing Grok make that then I would correct, and we would talk about it. Grok 4 could be this place where breakthroughs are actually happening, new breakthroughs,' Kalanick said.

Calacanis asked Kalanick the obvious question of whether Grok was actually on the verge of a scientific breakthrough. Anyone who actually understands large language models knows they can't achieve new ways of thinking: an LLM is just putting together words in the most statistically likely way, forming connections that may sound like a well-thought-out argument but are not a form of true 'intelligence' as humans would define it.

'Is your perception that the LLMs are actually starting to get to the reasoning level, that they'll come up with a novel concept theory and have that breakthrough? Or that we're kind of reading into it and it's just trying random stuff at the margins?' Calacanis asked.

Kalanick said he hadn't used Grok 4 yet because of technical difficulties accessing it, suggesting that perhaps a later version of Grok might be capable of such a thing. But he admitted the AI couldn't yet come up with new discoveries. 'No, it cannot come up with the new idea. These things are so wedded to what is known. And they're so like, even when I come up with a new idea, I have to really, it's like pulling a donkey. You see, you're pulling it because it doesn't want to break conventional wisdom. It's like really adhering to conventional wisdom. You're pulling it out and then eventually goes, oh, shit, you got something,' Kalanick said.
Kalanick emphasized that 'you have to double and triple check to make sure that you really got something,' making clear he understood that AI chatbots just make things up much of the time. But he still seemed convinced that the thing holding back Grok was 'conventional wisdom' rather than the natural limitations of the tech.

Palihapitiya went a step further than Kalanick, insisting that synthetic data could train new AI models. 'When these models are fully divorced from having to learn on the known world and instead can just learn synthetically, then everything gets flipped upside down to what is the best hypothesis you have or what is the best question? You could just give it some problem and it would just figure it out,' Palihapitiya said.

Musk revealed a similar line of thinking recently when he suggested 'general artificial intelligence' was close because he had asked Grok 'about materials science that are not in any books or on the Internet.' The idea, of course, is that Musk had hit the limits of known science rather than the limit of his scientific understanding. The billionaire really seems convinced that Grok was working on something new.

'That was how I felt when asking Grok 4 questions about materials science that are not in any books or on the Internet' — Elon Musk (@elonmusk), July 10, 2025

These guys are hyping up the idea of general artificial intelligence (AGI), which doesn't even have an exact definition. But it's far from the only term getting tossed around right now. AI folks also drop words like 'superintelligence' without defining what that means, but it sure keeps the investors intrigued.

These AI chatbots are pulling off a magic trick. They can often seem like they're 'thinking' or applying rational thought to a given answer, but they work by spitting out the word that's most likely to come next in a sentence, not by actually applying critical reasoning. There's a reason that people who understand AI the best are the least excited about using it.

Apple has gotten a lot of shit for not committing to AI in a more forceful way, something the All-In guys talked about, but the company understands perhaps better than most that there are limitations to this tech. In fact, Apple released a paper last month that shows how Large Reasoning Models (LRMs) struggle, facing 'a complete accuracy collapse beyond certain complexities.'

Apple's paper won't dampen the hype, of course. Just about every other major tech company is pushing hard into AI agents and investing billions of dollars into data centers. Meta CEO Mark Zuckerberg announced Monday that his company was building enormous new data centers to work on superintelligence. 'Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher,' Zuck wrote. 'I'm looking forward to working with the top researchers to advance the frontier!'
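That "most likely next word" claim is easy to make concrete. Below is a minimal, self-contained sketch of next-token sampling over a toy vocabulary; a real model scores roughly 100,000 tokens with a neural network rather than a hard-coded table, but the selection step looks much like this.

```python
import numpy as np

# Toy next-token predictor: the "model" is just a table of scores (logits)
# for each candidate word that could follow the prompt.
vocab = ["planet", "electron", "breakthrough", "banana"]
logits = np.array([1.2, 2.5, 0.7, -2.0])  # higher = more plausible continuation

def sample_next_word(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over logits, then sample a word index in proportion to probability."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

print(vocab[sample_next_word(logits)])
# Nothing in this loop reasons about physics; it only ranks continuations by
# statistical plausibility, which is the limitation the article describes.
```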


CBS News
How do you stop an AI model from turning Nazi? What the Grok drama reveals about AI training.
Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk's company xAI, is back in the headlines after calling itself "MechaHitler" and producing pro-Nazi remarks.

The developers have apologized for the "inappropriate posts" and "taken action to ban hate speech" from Grok's posts on X. Debates about AI bias have been revived, too. But the latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a "truth-seeking" AI free from bias, yet the technical implementation reveals systemic ideological programming. This amounts to an accidental case study in how AI systems embed their creators' values, with Musk's unfiltered public presence making visible what other companies typically obscure.

Grok is an AI chatbot with "a twist of humor and a dash of rebellion" developed by xAI, which also owns the X social media platform. The first version of Grok launched in 2023. Independent evaluations suggest the latest model, Grok 4, outpaces competitors on "intelligence" tests. The chatbot is available standalone and on X. xAI states "AI's knowledge should be all-encompassing and as far-reaching as possible." Musk has previously positioned Grok as a truth-telling alternative to chatbots accused of being "woke" by right-wing commentators.

But beyond the latest Nazism scandal, Grok has made headlines for generating threats of sexual violence, bringing up "white genocide" in South Africa, and making insulting statements about politicians. The latter led to its ban in Turkey.

So how do developers imbue an AI with such values and shape chatbot behaviour? Today's chatbots are built using large language models (LLMs), which offer several levers developers can lean on.

Pre-training

First, developers curate the data used during pre-training, the first step in building a chatbot. This involves not just filtering unwanted content, but also emphasising desired material. GPT-3 was shown Wikipedia up to six times more than other datasets because OpenAI considered it higher quality. Grok is trained on various sources, including posts from X, which might explain why Grok has been reported to check Elon Musk's opinion on controversial topics.

Musk has shared that xAI curates Grok's training data, for example to improve legal knowledge and to remove LLM-generated content for quality control. He also appealed to the X community for difficult "galaxy brain" problems and facts that are "politically incorrect, but nonetheless factually true". We don't know if these data were used, or what quality-control measures were applied.

Fine-tuning

The second step, fine-tuning, adjusts LLM behaviour using feedback. Developers create detailed manuals outlining their preferred ethical stances, which either human reviewers or AI systems then use as a rubric to evaluate and improve the chatbot's responses, effectively coding these values into the machine. A Business Insider investigation revealed that xAI's guidelines instructed its human "AI tutors" to look for "woke ideology" and "cancel culture". While the onboarding documents said Grok shouldn't "impose an opinion that confirms or denies a user's bias", they also stated it should avoid responses that claim both sides of a debate have merit when they do not.

System prompts

The system prompt, the set of instructions provided before every conversation, guides behaviour once the model is deployed. To its credit, xAI publishes Grok's system prompts.
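Mechanically, a system prompt is nothing exotic: it is a privileged first message sent with every request. Here is a minimal sketch in the OpenAI-style chat format many providers share; the prompt wording below is invented for illustration and is not Grok's.

```python
# Illustrative only: the system prompt is the first, privileged message in
# the request payload. This wording is invented, not Grok's actual prompt.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Be candid, cite sources, and do "
            "not present political opinions as fact."
        ),
    },
    {"role": "user", "content": "What do you think about topic X?"},
]
# A deployed chatbot would send `messages` with every request, e.g. with an
# OpenAI-style client: client.chat.completions.create(model=..., messages=messages)
```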
Grok's published instructions to "assume subjective viewpoints sourced from the media are biased" and "not shy away from making claims which are politically incorrect, as long as they are well substantiated" were likely key factors in the latest controversy. These prompts are being updated daily at the time of writing, and their evolution is a fascinating case study in itself.

Guardrails

Finally, developers can also add guardrails, filters that block certain requests or responses. OpenAI claims it doesn't permit ChatGPT "to generate hateful, harassing, violent or adult content". Meanwhile, the Chinese model DeepSeek censors discussion of Tiananmen Square. Ad-hoc testing when writing this article suggests Grok is much less restrained in this regard than competitor products.

Grok's Nazi controversy highlights a deeper ethical issue: Would we prefer AI companies to be explicitly ideological and honest about it, or maintain the fiction of neutrality while secretly embedding their values? Every major AI system reflects its creator's worldview, from Microsoft Copilot's risk-averse corporate perspective to Anthropic Claude's safety-focused ethos. The difference is transparency. Musk's public statements make it easy to trace Grok's behaviours back to his stated beliefs about "woke ideology" and media bias. Meanwhile, when other platforms misfire spectacularly, we're left guessing whether this reflects leadership views, corporate risk aversion, regulatory pressure, or accident.

This feels familiar. Grok resembles Microsoft's 2016 hate-speech-spouting Tay chatbot, also trained on Twitter data and set loose on Twitter before being shut down. But there's a crucial difference. Tay's racism emerged from user manipulation and poor safeguards, an unintended consequence. Grok's behaviour appears to stem at least partially from its design.

The real lesson from Grok is about honesty in AI development. As these systems become more powerful and widespread (Grok support in Tesla vehicles was just announced), the question isn't whether AI will reflect human values. It's whether companies will be transparent about whose values they're encoding and why. Musk's approach is simultaneously more honest (we can see his influence) and more deceptive (claiming objectivity while programming subjectivity) than his competitors'. In an industry built on the myth of neutral algorithms, Grok reveals what's been true all along: there's no such thing as unbiased AI, only AI whose biases we can see with varying degrees of clarity.

Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license.
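As a coda to the four levers described above, the guardrails lever usually reduces to a filter wrapped around the model. A deliberately crude sketch follows, assuming a keyword block list; production systems use trained moderation classifiers rather than string matching, and the terms here are placeholders.

```python
# Crude illustration of an output guardrail: screen a drafted reply before it
# reaches the user. Real deployments use trained moderation models, not a
# hard-coded block list; the terms below are placeholders.
BLOCKED_TERMS = {"mechahitler", "placeholder_slur"}

def guarded_reply(draft: str) -> str:
    if any(term in draft.lower() for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."  # refuse rather than publish
    return draft

print(guarded_reply("Here is a normal answer."))  # passes the filter unchanged
```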