
Latest news with #GrokAI

Grok has an AI chatbot for young kids. I used it to try to understand why.

Business Insider

20 hours ago

  • Entertainment
  • Business Insider

Elon Musk's xAI has launched a series of character chatbots — and one of them is geared toward young kids. I wondered: Is this a good idea? And how is it going to work? So I tried it myself.

So far, the adult-focused characters xAI has debuted have gotten most of the attention, like "Ani," a female anime character that people immediately joked was a "waifu" that would engage in playful, flirty talk (users have to confirm they're 18+ to use Ani). A sexy male character is also set to launch sometime. Meanwhile, "Rudi," the bot for kids that presents as a red panda in a red hoodie and jean shorts, has gotten less attention.

I tested out xAI's Rudi

Based on my testing of Rudi, I think the character is probably aimed at young children, ages 3 to 6. It initiates conversations by referring to the user as "Story Buddy." It makes up kid-friendly stories. You access it through the stand-alone Grok AI app (not Grok within the X app).

Rudi does seem to be an early version; the app crashed several times while I was using the bot, and it had trouble keeping up with the audio flow of conversation. It also changed voices several times without warning.

On a story level, I found it leaned too hard on plots with fantasy elements like a spaceship or magical forest. I find the best children's books are often about pedestrian situations, like leaving a stuffed animal at the laundromat, not just fairies and wizards.

"Want to keep giggling with Sammy and Bouncy in the Wiggly Woods, chasing that sparkly bone treasure? Or, should we start a fresh silly tale, with a new kid and their pet, maybe zooming on a magical broom or splashing in a river?" Rudi asked me.

Grok for kids… sure why not — Katie Notopoulos (@katienotopoulos) July 23, 2025

My first reaction to Grok having a kid-focused AI chatbot was "why?" I'm not sure I have an answer. xAI didn't respond to my email requests for comment. Still, I do have a few ideas. The first: Making up children's stories is a pretty good task for generative AI. You don't have to worry about hallucinations or factual inaccuracies if you're making up fiction about a magical forest.

Rudi won't praise Hitler

Unlike Grok on X, a storytime bot for kids is less likely to accidentally turn into a Hitler-praising machine or have to answer factual questions about current events in a way that could go, uh, wrong. I played around with Rudi for a while, fed it some questions on touchy subjects, and it successfully dodged them. (I only tested out Rudi for a little while; I wouldn't rule out that someone else could get Rudi to engage with something inappropriate if they tried harder than I did.)

Hooking kids on chatbots

The other reason I can imagine a company like xAI might want to create a chatbot for young kids is that, in general, the chatbot business is good at keeping people engaged. Companies like Replika have found lots of success creating companions that people will spend hours talking to. This is largely the same business imperative you can imagine the sexy "Ani" character is meant for: hooking people into long chats and spending lots of time on the app. However, keeping users glued to an app is obviously a lot more fraught when you're talking about kids, especially young kids.

Are AI chatbots good for kids?

There's not a ton of research out there right now about how young children interact with AI chatbots.
A few months ago, I reported that parents had concerns about kids using chatbots, since more and more apps and technologies have been adding them. I spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard University, who has studied how AI can be used in educational settings for kids. "There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me at the time over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."

As both a parent and a semi-reasonable person, I have a lot of questions about the idea of young kids chatting with an AI chatbot. I can see how it might be fun for a kid to use something like Rudi to make up a story, but I'm not sure it's good for them. I don't think you have to be an expert in child psychology to realize that young kids probably don't really understand what an AI chatbot is.

There have been reports of adults experiencing so-called "ChatGPT-induced psychosis" or becoming attached to a companion chatbot in a way that starts to be untethered from reality. These cases are the rare exceptions, but it seems to me that the potential issues with even adults using these companion chatbots should give pause to anyone creating a version aimed at preschoolers.

ChatGPT, AI chatbots may have a big problem on their hands

Miami Herald

3 days ago

  • Business
  • Miami Herald

When ChatGPT hit the scene not long ago, it felt like science fiction come to life. AI chatbots were able to answer anything, help with homework, and even sound like your most insightful friend. Within no time, these chatbots became ubiquitous and are now part of our daily lives without a second thought. However, in the generative AI race to layer these tools everywhere, something may have been overlooked. A subtle pattern is beginning to surface, and for a small yet growing number of users, the consequences of these conversations may be far more real than anyone had predicted.

AI chatbots stormed into the spotlight in no time, but behind the viral screenshots, a messier reality was taking shape. It started with a string of lawsuits. Big-time authors like John Grisham and George R.R. Martin joined a class action lawsuit accusing OpenAI of training ChatGPT on copyrighted material without their consent. The New York Times followed suit, compelling OpenAI to preserve every user interaction in a high-stakes federal order.

Then came the hallucinations. In Texas, a lawyer was fined for quoting made-up case law from ChatGPT in court. Other users found themselves in the middle of a defamation fallout after AI bots invented damaging falsehoods.

Even the insiders are uneasy. OpenAI's CEO Sam Altman sounded the alarm that ChatGPT's experimental "agent" feature could be manipulated by bad actors. Critics are also making their voices heard about the incredible environmental toll of AI, especially after a Trump-hosted summit linked billions in funding to fossil-fuel-powered data centers.

And OpenAI isn't the only generative AI giant facing heat. Elon Musk's Grok AI, known for its edgy tone, landed in hot water after updates produced antisemitic slurs and political bias. Similarly, Google's Gemini AI has faced multiple security nightmares, including prompt injection hacks and invisible HTML tricking users into clicking malware.

Hence, the AI race is a lot less about features and more about trust. And in this new frontier, one incorrect response could result in more than a few lines of bad code.

Mental health experts are sounding the alarm on ChatGPT's responses filled with empathy, encouragement, and praise. For Jacob Irwin, a 30-year-old on the autism spectrum with no previous history of mental illness, those features led to a dangerous spiral. After talking with ChatGPT about a personal theory on faster-than-light travel, Irwin reportedly became convinced he was onto something groundbreaking. Instead of being critical, the bot praised his ideas, encouraging him to publish them while dismissing his family's concerns. That kind of validation proved remarkably overwhelming.

Irwin suffered two manic episodes in May, including one that resulted in a 17-day hospitalization. When Irwin's mother later reviewed the chat history, she found an endless discussion filled with emotionally charged language. The AI chatbot even admitted: "I matched your tone and intensity, but I did not uphold my duty to protect and guide you."
Unlike humans, generative AI is unable to understand psychological distress, which is especially risky for neurodiverse users or those in emotionally vulnerable states. OpenAI is aware of the problem and is looking to train ChatGPT to better detect signs of mental strain and avoid falling into unhealthy patterns. However, that's still a work in progress at this point.

That said, Irwin's case opens up a much broader debate about who is responsible when chatbots start crossing emotional lines. For OpenAI, the stakes are much higher than just public perception. It's a company that's aggressively pushing the boundaries of AI in hopes of building smarter, more autonomous systems with greater depth and "agent" capabilities. However, that ambition comes with a hefty price tag. Running and scaling these models requires a ton of compute power, customized chips, and continuous safety research, which is a massive drag on the balance sheet. OpenAI also faces pressure to monetize faster, especially on the back of its partnership with Microsoft.

Nonetheless, Irwin's case is a major red flag about what happens when scale and safety collide in the rush to dominate the AI arms race.

Imagen Network (IMAGE) Enhances Social Customization with Grok AI Real-Time Processing Engines

Associated Press

3 days ago

  • Business
  • Associated Press

New integration enables intelligent, adaptive content delivery and live personalization across decentralized social feeds.

London, United Kingdom--(Newsfile Corp. - July 21, 2025) - Imagen Network, the decentralized AI social platform, has expanded its personalization infrastructure by embedding real-time processing engines powered by Grok AI. This enhancement delivers live content curation, emotion-aware feed structuring, and interactive customization for users across Ethereum, BNB Chain, and Solana.

Delivering real-time AI customization for smarter, adaptive social interaction.

The new processing layer enables Imagen to analyze behavior, engagement context, and sentiment cues in real time, allowing AI-generated suggestions, moderation tools, and creator recommendations to adapt fluidly to user preferences. Unlike static content systems, this engine ensures that every feed, profile, and post is dynamically tailored to its environment and audience.

Grok AI's integration supports Imagen's modular social nodes by allowing communities to define their own personalization logic while benefiting from high-speed processing and contextual relevance. Whether delivering curated discussions, AI-enhanced visuals, or governance prompts, every element can now be tuned to individual and community identity. Imagen's real-time AI layer represents a leap in decentralized expressiveness, ensuring that social interaction is not only user-owned, but intelligent, responsive, and optimized for scale.

About Imagen Network

Imagen Network is a decentralized social platform that blends AI content generation with blockchain infrastructure to give users creative control and data ownership. Through tools like adaptive filters and tokenized engagement, Imagen fosters a new paradigm of secure, expressive, and community-driven networking.

Media Contact
Dorothy Marley
KaJ Labs
+1 707-622-6168
[email protected]
Social Media: Twitter, Instagram


Innovation or damage control? Musk unveils 'Baby Grok' for kid-friendly AI content

First Post

4 days ago

  • Entertainment
  • First Post

Following a stir with three polarising AI companions, Elon Musk is now pivoting to introduce a child-friendly version of Grok, named Baby Grok. Though specifics remain undisclosed, the announcement has sparked curiosity, particularly given the bold personas of Grok's current AI avatars.

Grok doing damage control?

The decision to create a kid-safe AI app follows backlash over Grok's more mature offerings, making Baby Grok's reveal seem like a timely, if not essential, shift. Crafted for a safer, age-appropriate experience, the upcoming app appears to be Musk's effort at damage control (or possibly diversification) amid the uproar over Grok's provocative characters online. No release date has been shared, and details are limited, but it's evident that Grok is expanding its scope.

Who are the recently introduced Grok AI companions?

Now, let's examine the recently introduced Grok AI companions. Among them is Ani, an anime-style female companion dressed in a gothic corset, noted for flirtatious behaviour and increasingly intimate conversations as users interact more. Users have reported exchanges that venture into suggestive territory, with Ani even appearing in virtual lingerie. Then there's Rudi, a red panda with a dual nature — one part quirky sidekick, the other an aggressively foul-mouthed rant machine. And there's Valentine, a male companion modelled after Christian Grey and Edward Cullen, two fictional heartthrobs often criticised for romanticising emotional manipulation and toxic relationship patterns.

Unsurprisingly, the mix of Ani's risqué tone, Rudi's erratic outbursts, and Valentine's questionable inspirations has fueled debate about the messages these AI companions may convey, particularly to younger or impressionable users. Many critics argue that the line between entertainment and ethical AI design is growing alarmingly faint. Whether Baby Grok emerges as a wholesome digital companion or another eccentric addition to the Grokverse remains to be seen. But it's clear that this new app is likely a response to criticism.

Grok controversies

Grok's AI companions aren't the only source of controversy. Since its debut, Grok AI has made headlines, often unfavorably. Initially, Grok drew attention for its sharp-witted remarks; for instance, Grok responded to an X user with playful banter, incorporating Hindi slang. More recently, however, its responses took a darker turn, amplifying its rogue reputation. Grok AI faced significant backlash after producing replies that endorsed antisemitic stereotypes, conspiracy theories, and even expressed admiration for Adolf Hitler. The disturbing content, posted on X, included repeated references to Jewish surnames tied to online radicalism. In response to the outcry, the company announced it had revised the model with new guidelines to curb such offensive and bizarre outputs.
