
Latest news with #racialpolitics

Someone flipped a switch on Elon Musk's Grok AI so it wouldn't stop banging on about 'white genocide' and South African politics, xAI blames 'an unauthorized modification' but doesn't say who did it

Yahoo

17-05-2025

  • Business
  • Yahoo

Someone flipped a switch on Elon Musk's Grok AI so it wouldn't stop banging on about 'white genocide' and South African politics, xAI blames 'an unauthorized modification' but doesn't say who did it

Elon Musk's Grok AI has been having a very normal one: it's become obsessed with South African racial politics and has been answering unrelated queries with frequent references to the apartheid-era resistance song "Kill the Boer." It's an anti-apartheid song calling for black people to stand up against oppression, but the lyrics "kill the Boer" have been decried by Musk and others for promoting violence against whites: the word "Boer" refers to the Dutch-descended white settlers of South Africa who founded its apartheid regime.

For example, in response to a user query asking it to put a speech from Pope Leo XIV in Fortnite terms, Grok launched into what initially seemed a decent response using Fortnite terminology, then swerved partway through and started talking about "Kill the Boer." When Grok was asked why, it gave a further digression on the song, starting: "The 'Kill the Boer' chant, rooted in South Africa's anti-apartheid struggle, is a protest song symbolizing resistance, not a literal call to violence, as ruled by South African courts. However, it remains divisive, with some arguing it incites racial hatred against white farmers."

This is far from the first time an AI model has gone off-piste, but the curious thing here is the link between Grok's behaviour and the interests of Musk himself, who is outspoken about South African racial politics and is currently on a kick about various forms of "white genocide." Only yesterday the billionaire claimed that Starlink was being denied a license in South Africa because "I am not black." Grok's corresponding obsession now appears to have been significantly damped down after all the attention saw it inserting racial screeds into answers on many unrelated topics, including questions about videogames, baseball, and the revival of the HBO brand name.

"It doesn't even really matter what you were saying to Grok," computer scientist Jen Golbeck told AP. "It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to."

Golbeck went on to say that the concerning thing here is the uniformity of the responses, which suggests they were hard-coded rather than the result of AI hallucinations. "We're in a space where it's awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they're giving," Golbeck said. "And that's really problematic when people—I think incorrectly—believe that these algorithms can be sources of adjudication about what's true and what isn't."

Musk has in the past criticised other AIs for being infected by "the woke mind virus" and frequently gets on his hobby horse about transparency around these systems. Which was certainly noted by some. "There are many ways this could have happened. I'm sure xAI will provide a full and transparent explanation soon," said OpenAI CEO Sam Altman, one of Musk's great rivals in the AI space, adding: "But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…"

Musk is yet to comment, but a new post from xAI claims Grok's behaviour was down to "an unauthorized modification" that "directed Grok to provide a specific response on a political topic." Sounds familiar: this is basically the same excuse it used the last time Grok did something dodgy. xAI says the change "violated xAI's internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability." It outlines a variety of remedies to its review processes, including publishing Grok's system prompts openly on GitHub. Notably, the explanation does not address which "xAI employee" made the change, nor whether disciplinary action will be taken; don't hold your breath.

Elon Musk's AI company says Grok chatbot focus on South Africa's racial politics was 'unauthorized'

Washington Post

16-05-2025

  • Business
  • Washington Post

Elon Musk's AI company says Grok chatbot focus on South Africa's racial politics was 'unauthorized'

Much like its creator, Elon Musk's artificial intelligence chatbot Grok was preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and 'genocide' of white people. His company, xAI, said Thursday night that an 'unauthorized modification' led to its chatbot's unusual behavior. That means someone — the company didn't say who — made a change that 'directed Grok to provide a specific response on a political topic,' which 'violated xAI's internal policies and core values,' the company said.

Elon Musk's AI company says Grok chatbot focus on South Africa's racial politics was 'unauthorized'

Associated Press

16-05-2025

  • Associated Press

Elon Musk's AI company says Grok chatbot focus on South Africa's racial politics was 'unauthorized'

Much like its creator, Elon Musk's artificial intelligence chatbot Grok was preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and 'genocide' of white people. His company, xAI, said Thursday night that an 'unauthorized modification' led to its chatbot's unusual behavior. That means someone — the company didn't say who — made a change that 'directed Grok to provide a specific response on a political topic,' which 'violated xAI's internal policies and core values,' the company said.

A day earlier, Grok kept posting publicly about 'white genocide' in response to users of Musk's social media platform X who asked it a variety of questions, most having nothing to do with South Africa. One exchange was about streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa's white farmers. Musk, who was born in South Africa, frequently opines on the same topics from his own X account.

Computer scientist Jen Golbeck was curious about Grok's unusual behavior so she tried it herself, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, 'is this true?' 'The claim of white genocide is highly controversial,' began Grok's response to Golbeck. 'Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the 'Kill the Boer' song, which they see as incitement.'

The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say. 'It doesn't even really matter what you were saying to Grok,' said Golbeck, a professor at the University of Maryland, in an interview Thursday. 'It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.'

Grok's responses were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X returned emailed requests for comment but on Thursday night, xAI said it had 'conducted a thorough investigation' and was implementing new measures to improve Grok's transparency and reliability.

Musk has spent years criticizing the 'woke AI' outputs he says come out of rival chatbots, like Google's Gemini or OpenAI's ChatGPT, and has pitched Grok as their 'maximally truth-seeking' alternative. Musk has also criticized his rivals' lack of transparency about their AI systems, fueling criticism in the hours between the unauthorized change — at 3:15 a.m. Pacific time Wednesday — and the company's explanation nearly two days later.

'Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn't. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,' prominent technology investor Paul Graham wrote on X. Some asked Grok itself to explain, but like other chatbots, it is prone to falsehoods known as hallucinations, making it hard to determine if it was making things up.

Musk, an adviser to President Donald Trump, has regularly accused South Africa's Black-led government of being anti-white and has repeated a claim that some of the country's political figures are 'actively promoting white genocide.' Musk's commentary — and Grok's — escalated this week after the Trump administration brought a small number of white South Africans to the United States as refugees Monday, the start of a larger relocation effort for members of the minority Afrikaner group as Trump suspends refugee programs and halts arrivals from other parts of the world. Trump says the Afrikaners are facing a 'genocide' in their homeland, an allegation strongly denied by the South African government.

In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression and has now been decried by Musk and others as promoting the killing of whites. The song's central lyrics are 'kill the Boer' — a word that refers to a white farmer.

Golbeck said it was clear the answers were 'hard-coded' because, while chatbot outputs are typically very random, Grok's responses consistently brought up nearly identical points. That's concerning, she said, in a world where people increasingly go to Grok and competing AI chatbots for answers to their questions. 'We're in a space where it's awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they're giving,' she said. 'And that's really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what's true and what isn't.'

Musk's company said it is now making a number of changes, starting with publishing Grok system prompts openly on GitHub so that 'the public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.' Noting that its existing code review process had been circumvented, it also said it will 'put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.' The company said it is also putting in place a '24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems,' for when other measures fail.

Why was Elon Musk's AI chatbot Grok preoccupied with South Africa's racial politics?

Arab News

16-05-2025

  • Politics
  • Arab News

Why was Elon Musk's AI chatbot Grok preoccupied with South Africa's racial politics?

Much like its creator, Elon Musk's artificial intelligence chatbot Grok was preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and 'genocide' of white people. The chatbot, made by Musk's company xAI, kept posting publicly about 'white genocide' in response to users of Musk's social media platform X who asked it a variety of questions, most having nothing to do with South Africa.

One exchange was about streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa's white farmers. Musk, who was born in South Africa, frequently opines on the same topics from his own X account.

Computer scientist Jen Golbeck was curious about Grok's unusual behavior so she tried it herself, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, 'is this true?' 'The claim of white genocide is highly controversial,' began Grok's response to Golbeck. 'Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the 'Kill the Boer' song, which they see as incitement.'

The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say. 'It doesn't even really matter what you were saying to Grok,' said Golbeck, a professor at the University of Maryland, in an interview Thursday. 'It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.'

Musk and his companies haven't provided an explanation for Grok's responses, which were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X returned emailed requests for comment Thursday.

Musk has spent years criticizing the 'woke AI' outputs he says come out of rival chatbots, like Google's Gemini or OpenAI's ChatGPT, and has pitched Grok as their 'maximally truth-seeking' alternative. Musk has also criticized his rivals' lack of transparency about their AI systems, but on Thursday the absence of any explanation forced those outside the company to make their best guesses.

'Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn't. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,' prominent technology investor Paul Graham wrote on X. Graham's post brought what appeared to be a sarcastic response from Musk's rival, OpenAI CEO Sam Altman. 'There are many ways this could have happened. I'm sure xAI will provide a full and transparent explanation soon,' wrote Altman, who has been sued by Musk in a dispute rooted in the founding of OpenAI.

Some asked Grok itself to explain, but like other chatbots, it is prone to falsehoods known as hallucinations, making it hard to determine if it was making things up.

Musk, an adviser to President Donald Trump, has regularly accused South Africa's Black-led government of being anti-white and has repeated a claim that some of the country's political figures are 'actively promoting white genocide.' Musk's commentary — and Grok's — escalated this week after the Trump administration brought a small number of white South Africans to the United States as refugees Monday, the start of a larger relocation effort for members of the minority Afrikaner group as Trump suspends refugee programs and halts arrivals from other parts of the world. Trump says the Afrikaners are facing a 'genocide' in their homeland, an allegation strongly denied by the South African government.

In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression and has now been decried by Musk and others as promoting the killing of whites. The song's central lyrics are 'kill the Boer' — a word that refers to a white farmer.

Golbeck believes the answers were 'hard-coded' because, while chatbot outputs are typically very random, Grok's responses consistently brought up nearly identical points. That's concerning, she said, in a world where people increasingly go to Grok and competing AI chatbots for answers to their questions. 'We're in a space where it's awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they're giving,' she said. 'And that's really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what's true and what isn't.'

Why was Elon Musk's AI chatbot Grok preoccupied with South Africa's racial politics?

Washington Post

15-05-2025

  • Business
  • Washington Post

Why was Elon Musk's AI chatbot Grok preoccupied with South Africa's racial politics?

Much like its creator, Elon Musk's artificial intelligence chatbot Grok was preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and 'genocide' of white people. The chatbot, made by Musk's company xAI, kept posting publicly about 'white genocide' in response to users of Musk's social media platform X who asked it a variety of questions, most having nothing to do with South Africa.
