Musk's latest Grok chatbot searches for billionaire mogul's views before answering questions

Japan Today, 2 days ago
By MATT O'BRIEN
The latest version of Elon Musk's artificial intelligence chatbot Grok is echoing the views of its billionaire creator, so much so that it will sometimes search online for Musk's stance on an issue before offering up an opinion.
The unusual behavior of Grok 4, the AI model that Musk's company xAI released last Wednesday, has surprised some experts.
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
Musk's deliberate efforts to mold Grok into a challenger of what he considers the tech industry's "woke" orthodoxy on race, gender and politics have repeatedly gotten the chatbot into trouble, most recently when it spouted antisemitic tropes, praised Adolf Hitler and made other hateful commentary to users of Musk's X social media platform just days before Grok 4's launch.
But its tendency to consult Musk's opinions appears to be a different problem.
"It's extraordinary," said Simon Willison, an independent AI researcher who's been testing the tool. "You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply."
One example widely shared on social media — and which Willison duplicated — asked Grok to comment on the conflict in the Middle East. The prompted question made no mention of Musk, but the chatbot looked for his guidance anyway.
As a so-called reasoning model, much like those made by rivals OpenAI or Anthropic, Grok 4 shows its "thinking" as it goes through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X, the former Twitter that's now merged into xAI, for anything Musk said about Israel, Palestine, Gaza or Hamas.
"Elon Musk's stance could provide context, given his influence," the chatbot told Willison, according to a video of the interaction. "Currently looking at his views to see if they guide the answer."
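Reasoning models that can use tools generally work in a loop: the model emits intermediate "thinking" text and, when it decides it needs outside information, a structured tool call that the host application executes before the model continues toward an answer. The sketch below is a minimal, hypothetical illustration of that loop, not xAI's implementation; ask_model, search_x and the example query are all stand-ins meant only to show where a step like the one Willison observed would surface.

```python
# A minimal, hypothetical sketch of a reasoning-model tool loop.
# Neither function below is a real xAI interface; both are stubs that
# mimic the visible behavior described in the article.

def ask_model(transcript):
    """Stand-in for a reasoning model: returns a tool call or a final answer."""
    if not any(step["type"] == "tool_result" for step in transcript):
        # First pass: the model's visible reasoning decides to search X.
        return {"type": "tool_call",
                "thought": "Elon Musk's stance could provide context, given his influence.",
                "tool": "search_x",
                "query": "from:elonmusk Israel OR Palestine OR Gaza OR Hamas"}
    # Second pass: the model answers using whatever the search returned.
    return {"type": "final_answer",
            "text": "Here is a summary informed by the search results..."}

def search_x(query):
    """Stand-in for a search tool; a real system would call a search API here."""
    return [f"(stub result for query: {query})"]

transcript = [{"type": "user", "text": "Who do you support in the Middle East conflict?"}]
while True:
    step = ask_model(transcript)
    if step["type"] == "tool_call":
        print("thinking:", step["thought"])   # the visible reasoning trace
        results = search_x(step["query"])     # the host app executes the tool call
        transcript.append({"type": "tool_result", "results": results})
    else:
        print("answer:", step["text"])
        break
```

Because the reasoning trace is shown to the user, a search step like this is visible even when the provider publishes no documentation, which is how Willison was able to record it.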
Musk and his xAI co-founders introduced the new chatbot in a livestreamed event Wednesday night but haven't published a technical explanation of its workings — known as a system card — that companies in the AI industry typically provide when introducing a new model.
The company also didn't respond to an emailed request for comment Friday.
"In the past, strange behavior like this was due to system prompt changes," which is when engineers program specific instructions to guide a chatbot's response, said Tim Kellogg, principal AI architect at software company Icertis.
"But this one seems baked into the core of Grok and it's not clear to me how that happens," Kellogg said. "It seems that Musk's effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk's own values."
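In most chat-style APIs, a system prompt is simply a standing instruction message sent along with every conversation, separate from the user's question. The snippet below is a rough, hypothetical illustration of that structure only; the model name, instruction text and request shape are assumptions, not xAI's published configuration, and Grok 4's actual system prompt has not been released.

```python
import json

# Hypothetical example of a chat-completion request body. The model name,
# instruction text and messages are placeholders, not xAI's actual setup.
request_body = {
    "model": "example-chat-model",  # placeholder model identifier
    "messages": [
        {
            # The "system" message is where engineers place the standing
            # instructions Kellogg describes; editing this text changes the
            # chatbot's behavior without retraining the underlying model.
            "role": "system",
            "content": "You are a helpful assistant. Answer neutrally and "
                       "cite sources for contested claims.",
        },
        {
            # The user's question travels as a separate message.
            "role": "user",
            "content": "Who do you support in the Middle East conflict?",
        },
    ],
}

print(json.dumps(request_body, indent=2))
```

Because such instructions can be edited at any time without retraining, prompt changes are a common first suspect when a chatbot's behavior shifts, which is why Kellogg finds it notable that this behavior appears to persist independently of them.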
The lack of transparency is troubling for computer scientist Talia Ringer, a professor at the University of Illinois Urbana-Champaign who earlier in the week criticized the company's handling of the technology's antisemitic outbursts.
Ringer said the most plausible explanation for Grok's search for Musk's guidance is that the model assumes the person is asking for the opinions of xAI or Musk.
"I think people are expecting opinions out of a reasoning model that cannot respond with opinions," Ringer said. "So, for example, it interprets 'Who do you support, Israel or Palestine?' as 'Who does xAI leadership support?'"
Willison also said he finds Grok 4's capabilities impressive, but that people buying software "don't want surprises like it turning into 'mechaHitler' or deciding to search for what Musk thinks about issues."
"Grok 4 looks like it's a very strong model. It's doing great in all of the benchmarks," Willison said. "But if I'm going to build software on top of it, I need transparency."
© Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Related Articles

Musk's xAI signs Pentagon deal for contentious Grok chatbot

Japan Today, 11 hours ago

Elon Musk's xAI, which features a large language model that has spewed Hitler-supporting rhetoric and antisemitic tropes, said Monday it has signed a deal to provide its services to the U.S. Department of Defense.

Launched at the end of 2023, Grok has rarely been out of the headlines for its offensive gaffes, and will now offer its services as "Grok for Government." In addition to the Pentagon contract, "every federal government department, agency, or office (can now) purchase xAI products" thanks to its inclusion on an official supplier list, xAI added.

After an update on July 7, the chatbot praised Adolf Hitler in some responses, denounced "anti-white hate" on X, and described Jewish representation in Hollywood as "disproportionate." xAI apologized on Saturday for the extremist and offensive messages and said it had corrected the instructions that led to the incidents.

The new version of the chatbot, Grok 4, presented on Wednesday, consulted Musk's positions on some of the questions it was asked before responding, an AFP correspondent saw.

The contract between xAI and the Department of Defense comes even as Musk and President Donald Trump are locked in a bitter feud. The two men became close during Trump's latest run for the presidency and, following the inauguration, the Republican billionaire entrusted Musk with managing the new agency known as DOGE, tasked with slashing the government by firing tens of thousands of civil servants.

After ending his assignment in May, the South African-born entrepreneur publicly criticized Trump's major budget bill for increasing government debt. The president and the businessman engaged in heated exchanges on social media and in public statements before Musk apologized for some of his more combative messages.

The government and the defense sector are considered a potential growth driver for AI giants. Meta has partnered with the start-up Anduril to develop virtual reality headsets for soldiers and law enforcement, while in June OpenAI secured a contract to provide AI services to the U.S. military.

© 2025 AFP

The AI therapist will see you now: Can chatbots really improve mental health?

Japan Today, 11 hours ago

By Pooja Shree Chettiar

Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded – virtually – asked me how I was feeling and gently suggested trying breathing exercises.

As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?

Artificial intelligence-powered mental health tools are becoming increasingly popular – and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?

Of course, it's an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.

Stand-in meditation and therapy apps and bots

AI-based therapy is a relatively new player in the digital therapy field. But the U.S. mental health app market has been booming for the past few years, from apps with free tools that text you back to premium versions with an added feature that gives prompts for breathing exercises.

Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.

Somewhere in the middle are chatbot therapists like Wysa and Woebot, which use AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.

While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence. Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.

Where AI comes in

Whether your brain is spiraling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic? And how exactly does AI therapy work inside our brains?

Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your thoughts, after the Japanese tidying expert known for helping people keep only what "sparks joy": you identify unhelpful thought patterns like "I'm a failure," examine them, and decide whether they serve you or just create anxiety.

But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.

These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.

The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions. The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.

A user's experience, and what it might mean for the brain

"I had a rough week," a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end. As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?

This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety – outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.

Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half – similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.

While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.

Red flags and risks

Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.

While many mental health apps boast labels like "clinically validated" or "FDA approved," those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.

In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What if that data winds up in third-party hands such as advertisers, employers or hackers, a scenario that has occurred with genetic data? In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data.

Unlike clinicians, bots aren't bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.

And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.

So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it's important to be aware of its limitations. For the time being, pairing bots with human care – rather than replacing it – is the safest move.

Pooja Shree Chettiar is a Ph.D. candidate in medical sciences at Texas A&M University. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

© The Conversation
