
Grok's 'white genocide' meltdown nods to the real dangers of the AI arms race
It's been a full year since Google's AI overview tool went viral for encouraging people to eat glue and put rocks on pizza. At the time, the mood around the coverage seemed to be: Oh, that silly AI is just hallucinating again.
A year later, AI engineers have solved hallucination problems and brought the world closer to their utopian vision of a society whose rough edges are being smoothed out by advances in machine learning as humans across the planet are brought together to…
Just kidding. It's much worse now.
The problems posed by large language models are as obvious as they were last year, and the year before that, and the year before that. But product designers, backed by aggressive investors, have been busy finding new ways to shove the technology into more spheres of our online experience, so we're finding all kinds of new pressure points — and rarely are they as fun or silly as Google's rocks-on-pizza glitch.
Take Grok, the xAI model that is becoming almost as conspiracy-theory-addled as its creator, Elon Musk.
The bot last week devolved into a compulsive South African 'white genocide' conspiracy theorist, injecting a tirade about violence against Afrikaners into unrelated conversations, like a roommate who just took up CrossFit or an uncle wondering if you've heard the good word about Bitcoin.
xAI blamed Grok's unwanted rants on an unnamed 'rogue employee' who tinkered with Grok's code in the extremely early morning hours. (As an aside, in what is surely an unrelated matter, Musk was born and raised in South Africa and has argued that 'white genocide' was committed in the nation — it wasn't.)
Grok also cast doubt on the Department of Justice's ruling that Jeffrey Epstein's death was a suicide by hanging, saying that the 'official reports lack transparency.' The Musk bot also dabbled in Holocaust denial last week, as Rolling Stone's Miles Klee reports. Grok said on X that it was 'skeptical' of the consensus estimate among historians that 6 million Jews were murdered by the Nazis because 'numbers can be manipulated for political narratives.'
Manipulated, you say? What, so someone with bad intentions could input their own views into a data set in order to advance a false narrative? Gee, Grok, that does seem like a real risk. (The irony here is that Musk, no fan of traditional media, has gone and made a machine that does the exact kind of bias-amplification and agenda-pushing he accuses journalists of doing.)
The Grok meltdown underscores some of the fundamental problems at the heart of AI development that tech companies have so far yada-yada-yada'd through anytime they're pressed on questions of safety. (Last week, CNBC published a report citing more than a dozen AI professionals who say the industry has already moved past the research and safety-testing phases and is dead-set on pushing more AI products to market as soon as possible.)
Let's forget, for a moment, that so far every forced attempt to put AI chatbots into our existing tech has been a disaster, because even the baseline use cases for the tech are either very dull (like having a bot summarize your text messages, poorly) or extremely unreliable (like having a bot summarize your text messages, poorly).
First, there's the 'garbage in, garbage out' issue that skeptics have long warned about. Large language models like Grok and ChatGPT are trained on data vacuumed up indiscriminately from across the internet, with all its flaws and messy humanity baked in.
That's a problem because even when nice-seeming CEOs go on TV and tell you that their products are just trying to help humanity flourish, they're ignoring the fact that their products tend to amplify the biases of the engineers and designers who made them, and there are no internal mechanisms baked into the products to make sure they serve users rather than their masters. (Human bias is a well-known problem that journalists have spent decades guarding against in news by building transparent processes around editing and fact-checking.)
But what happens when a bot is made without the best of intentions? What if someone wants to build a bot to promote a religious or political ideology, and that someone is more sophisticated than whoever that 'rogue employee' was who got under the hood at xAI last week?
'Sooner or later, powerful people are going to use LLMs to shape your ideas,' AI researcher Gary Marcus wrote in a Substack post about Grok last week. 'Should we be worried? Hell, yeah.'
