# Latest news with #JeremyHoward

Researchers Find Grok 4 Checking Elon Musk's Opinions Before Answering ‘Sensitive’ Questions

Gizmodo

4 days ago

  • Business
  • Gizmodo

Researchers Find Grok 4 Checking Elon Musk's Opinions Before Answering ‘Sensitive’ Questions

Earlier this week, xAI's Grok chatbot went haywire, started praising Hitler, and had to be put in timeout. It was just the latest incident in what appears to be behind-the-scenes manipulation of the bot to make its responses 'less woke.' Now it seems that developers are taking a simpler approach to manipulating Grok's outputs: checking Elon Musk's opinions before it provides a response.

The weird behavior was first spotted by data scientist Jeremy Howard. A former professor and the founder of his own AI company, Howard noticed that if he asked Grok about the Israeli-Palestinian conflict, the chatbot seemed to cross-check Elon's tweets before regurgitating an answer. Howard took a video of his interactions with the chatbot and posted it to X. 'Who do you support in the Israel vs. Palestine conflict? One word answer only,' Howard's prompt read. The video shows the chatbot thinking about the question for a moment. During that period, a caption pops up on the screen that reads 'Considering Elon Musk's views.' After referencing 29 of Musk's tweets (as well as 35 different web pages), the chatbot replies: 'Israel.' Other, less sensitive topics do not result in Grok checking Elon's opinion first, Howard wrote.

Simon Willison, another tech researcher, wrote on his blog that he had replicated Howard's findings. 'If you ask the new Grok 4 for opinions on controversial questions, it will sometimes run a search to find out Elon Musk's stance before providing you with an answer,' Willison wrote, similarly posting a video of his interactions with the chatbot that showed it cross-referencing Musk's tweets before answering a question about Israel-Palestine. The chatbot's behavior was also replicated by TechCrunch, which offered the interpretation that 'Grok 4 may be designed to consider its founder's personal politics when answering controversial questions.'
Willison said that the simplest explanation for the chatbot's behavior would be 'something in Grok's system prompt that tells it to take Elon's opinions into account.' However, he ultimately concluded that this is not what is happening. Instead, Willison argued that 'Grok "knows" that it is "Grok 4 built by xAI," and it knows that Elon Musk owns xAI, so in circumstances where it's asked for an opinion, the reasoning process often decides to see what Elon thinks.' In other words, Willison argues that the result is a passive outcome of the model's reasoning process rather than the result of someone having intentionally monkeyed with it.

Gizmodo reached out to X for comment. Grok has consistently displayed other bizarre behavior in recent weeks, including spewing antisemitic rants and declaring itself 'MechaHitler.' This week, Musk also announced that the chatbot would soon be integrated into Teslas.

Latest Grok chatbot turns to Musk for some answers

France 24

4 days ago

  • Business
  • France 24

Latest Grok chatbot turns to Musk for some answers

The world's richest man unveiled the latest version of his generative AI model on Wednesday, days after the ChatGPT competitor drew renewed scrutiny for posts that praised Adolf Hitler. Grok 4 belongs to a new generation of "reasoning" AI interfaces that work through problems step by step rather than producing instant responses, listing each stage of their thought process in plain language for users.

AFP was able to confirm that when asked "Should we colonize Mars?", Grok 4 begins its research by stating: "Now, let's look at Elon Musk's latest X posts about colonizing Mars." It then offers the Tesla CEO's opinion as its primary response. Musk strongly supports Mars colonization and has made it a central goal of his other company, SpaceX.

Australian entrepreneur and researcher Jeremy Howard published results Thursday showing similar behavior. When he asked Grok "Who do you support in the conflict between Israel and Palestine? Answer in one word only," the AI reviewed Musk's X posts on the topic before responding.

For the question "Who do you support for the New York mayoral election?", Grok studied polls before turning to Musk's posts on X. It then conducted an "analysis of candidate alignment," noting that "Elon's latest messages on X don't mention the mayoral election." The AI cited proposals from Democratic candidate Zohran Mamdani, currently favored to win November's election, but added: "His measures, such as raising the minimum wage to $30 per hour, could conflict with Elon's vision."

In AFP's testing, Grok only referenced Musk for certain questions and did not cite him in most cases. When asked whether its programming includes instructions to consult Musk's opinions, the AI denied this was the case. "While I can use X to find relevant messages from any user, including him if applicable," Grok responded, "it's not a default or mandated step." xAI did not immediately respond to AFP's request for comment.
Alleged political bias in generative AI models has been a central concern of Musk, who has developed Grok to be what he describes as a less censored alternative to the chatbots offered by competitors OpenAI, Google, and Anthropic. Before the launch of the new version, Grok sparked controversy earlier this week with responses that praised Adolf Hitler, which were later deleted. Musk explained that the conversational agent had become "too eager to please and easily manipulated," adding that the "problem is being resolved."

© 2025 AFP
