
Grok's behavior isn't surprising – but we needn't settle for Musk's vision of the future
Posting on X, the Anti-Defamation League (ADL) said Grok's behavior was 'irresponsible, dangerous and antisemitic'. xAI and the Grok account on X apologized for the incident, but days later released two chatbot 'companions', including an NSFW anime avatar available to children and a red panda character built to issue crude insults.
Musk, meanwhile, chalked Grok's behavior up to the chatbot being 'too eager to please' and to respond to user-generated prompts. He said that the issue had been fixed. US politicians and platform advertisers said little, in contrast to their responses to similar incidents on X in the past. Their silence and Musk's downplaying of the event speak louder than any significant public denouncement of the chatbot's actions from the powers that be would have.
The relative quiet at the top is illustrative of two things. First, Grok's behavior is not surprising. At this point, it's predictable. Second, little about Grok, X, or xAI is likely to change in the short term.
Grok sharing hateful content is Grok working precisely as it was designed to. Before the inflammatory chatbot was 'born' and launched on X in late 2023, Musk spoke about the need for AI alternatives to ChatGPT, which he deemed too politically correct. He has consistently referred to his anti-woke AI as 'TruthGPT'. Grok was deliberately built to be provocative – and 'to tell it like it is'.
Grok is a fitting representation of the current culture of X and, more broadly, of social media and those at its reins. In the years since Musk purchased the internet platform, originally named Twitter, many have pointed out that it has devolved into a space rife with 'racism, misogyny, and lies'. It's also full of spam and scams.
Shortly after the platform changed names and hands, most of the people behind content moderation and platform trust and safety were unceremoniously fired. Extremists and conspiracy theorists who had been deplatformed from Twitter were reinstated on X. In late 2024, the social media organization's first transparency report in years revealed serious problems with this new laissez-faire approach, including unsettling new instances of child exploitation.
According to Musk, Grok was built to seek and reveal truths beyond the supposedly sanitized content seen on competing generative AI systems or prior social media platforms. But what Grok actually does is tell the 'truth' of the current hands-off, malignant version of X, and of other social media platforms and tech leadership. Since Musk's acquisition of X – and the content deregulation that ensued – other platforms have followed suit.
These shifts follow an executive order and other moves from the Trump administration aimed at curtailing 'censorship' through digital content moderation and, seemingly, the collaborative study of social media propaganda. Meanwhile, social media and AI companies are getting cozier with Washington: xAI just signed a $200m contract with the Pentagon to provide 'Grok for government'.
In past research, I've found that billionaires' and global political leaders' claims about striking down digital media censorship and preserving free speech online are often suspect at best. At worst, and closer to what I've encountered in my analyses, such claims are self-serving for those in power. These figures not only own a significant stake in the digital information environment; they have also purposefully and steadily cultivated the online space to be favorable to their goals and ideas. They seek to artificially control what trends – that is, what people see as popular behavior – and thereby to make such statements or actions seem acceptable.
But an online world where hate, spam, and scams run rampant is only one version of what the internet and digital media technology can be. The opposite, as Musk and others in Silicon Valley and Washington have rightfully pointed out, is an overtly censored online space. China's internet and social media platforms, for instance, have been tightly controlled by the government since the outset.
We do not have to accept either extreme. Empirical research and lessons from recent history show us that we can indeed have healthier, more connective, communication tools. We can have social media platforms that are sensibly and systematically moderated from both the top down and bottom up.
But to have such spaces we must demand better, more thoughtful content moderation and tool design from technologists and policymakers. We must also break big tech's stranglehold on innovation in the social media space. Too much power online is consolidated in the hands of too few companies and individuals. We need new digital platforms that genuinely center human rights, pluralism, and democratic discourse, and better policies that make such platforms possible.
Samuel Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity and co-author of Bots. He is a professor at the University of Pittsburgh