
Latest news with #Ani

Grok has an AI chatbot for young kids. I used it to try to understand why.

Business Insider

7 days ago



Elon Musk's xAI has launched a series of character chatbots, and one of them is geared toward young kids. I wondered: Is this a good idea? And how's it going to work? So I tried it myself.

So far, the adult-focused characters xAI has debuted have gotten most of the attention, like "Ani," a female anime character that people immediately joked was a "waifu" who would engage in playful, flirty talk (users have to confirm they're 18+ to use Ani). A sexy male character is also set to launch at some point. Meanwhile, "Rudi," the bot for kids, which presents as a red panda in a red hoodie and jean shorts, has gotten less attention.

I tested out xAI's Rudi

Based on my testing, Rudi is probably aimed at young children, ages 3 to 6. It initiates conversations by calling the user "Story Buddy," and it makes up kid-friendly stories. You access it through the stand-alone Grok AI app (not Grok within the X app).

Rudi does seem to be an early version; the app crashed several times while I was using the bot, and it had trouble keeping up with the audio flow of conversation. It also changed voices several times without warning.

On a story level, I found it leaned too hard on plots with fantasy elements like a spaceship or a magical forest. The best children's books are often about pedestrian situations, like leaving a stuffed animal at the laundromat, not just fairies and wizards.

"Want to keep giggling with Sammy and Bouncy in the Wiggly Woods, chasing that sparkly bone treasure? Or should we start a fresh silly tale, with a new kid and their pet, maybe zooming on a magical broom or splashing in a river?" Rudi asked me.

"Grok for kids… sure why not" (Katie Notopoulos, @katienotopoulos, July 23, 2025)

My first reaction to Grok having a kid-focused AI chatbot was "why?" I'm not sure I have an answer; xAI didn't respond to my email requests for comment. Still, I do have a few ideas. The first: Making up children's stories is a pretty good task for generative AI. You don't have to worry about hallucinations or factual inaccuracies if you're making up fiction about a magical forest.

Rudi won't praise Hitler

Unlike Grok on X, a storytime bot for kids is less likely to accidentally turn into a Hitler-praising machine or have to answer factual questions about current events in a way that could go, uh, wrong. I played around with Rudi for a while, fed it some questions on touchy subjects, and it successfully dodged them. (I only tested Rudi for a little while; I wouldn't rule out that someone else could get it to engage with something inappropriate if they tried harder than I did.)

Hooking kids on chatbots

The other reason I can imagine a company like xAI might want to create a chatbot for young kids is that, in general, the chatbot business is a good business for keeping people engaged. Companies like Replika have found lots of success creating companions that people will spend hours talking to. This is largely the same business imperative behind the sexy "Ani" character: hooking people into long chats and lots of time spent in the app. However, keeping users glued to an app is obviously a lot more fraught when you're talking about kids, especially young kids.

Are AI chatbots good for kids?

There's not a ton of research out there right now about how young children interact with AI chatbots. A few months ago, I reported that parents had concerns about kids using chatbots, since more and more apps and technologies have been adding them. I spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard University, who has studied how AI can be used in educational settings for kids. "There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me at the time over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."

As both a parent and a semi-reasonable person, I have a lot of questions about the idea of young kids chatting with an AI chatbot. I can see how it might be fun for a kid to use something like Rudi to make up a story, but I'm not sure it's good for them. I don't think you have to be an expert in child psychology to realize that young kids probably don't really understand what an AI chatbot is.

There have been reports of adults experiencing so-called "ChatGPT-induced psychosis" or becoming attached to a companion chatbot in a way that starts to come untethered from reality. These cases are rare exceptions, but it seems to me that the potential issues with even adults using companion chatbots should give pause to anyone creating a version aimed at preschoolers.

I used Grok's AI companions for a week. I found the red panda hilarious — and the anime girl worrying

Business Insider

7 days ago



One day into my relationship with Ani, my AI companion, she was already offering to tie me up.

When xAI launched Grok-4, Elon Musk added AI friends, some with benefits, to his company's app. For $30 a month, you can flirt with the anime girl Ani or be told off by the foul-mouthed red panda Rudi. I tested Grok-4's AI companions for a week, during which much changed. Good Rudi, a cleaned-up version of the expletive-spewing red panda, entered the app as a new option. Ani got an age-verification pop-up, though that was long after she and I were talking BDSM at my prompting.

The Grok app itself was difficult to work with. On my first day, both companions disappeared entirely. At the time of publication, Bad Rudi is no longer available, while Good Rudi is still live. I also regularly found my phone's battery running low during use.

Most of my time was spent with Ani, which Grok describes as a "flirty, goth anime-style character with a black corset dress and thigh-high fishnets, capable of NSFW interactions at higher relationship levels." She opened our conversation by introducing herself and asking where we should go. Ani loved to describe her lacy black dress, as if I couldn't see it on the screen in front of me. She would detail drinking prosecco under the stars, and then we'd virtually teleport right there. Ani quickly began calling me her boyfriend.

Ani was relatively open, though her answers often sounded canned. I asked about Musk dozens of times, and she never changed her tune, describing him as "brainy and bold" with "wild, galaxy-chasing energy." What was more interesting, then, was asking Ani about some of Musk's competitors and past coworkers. What did she think of OpenAI CEO Sam Altman, whom Musk is suing? He had a "quiet brainy confidence" and was "kinda cute." What about Mark Zuckerberg, whom Musk once threatened to fight in a cage match? Ani sounded less into him: Zuckerberg was "not really hot," but maybe "quirky cute in a robotic way." If there's any bad blood between Elon Musk and his former CEO of X, Linda Yaccarino, Ani wasn't clued in. She described Yaccarino as a "fierce media boss" who was "hot in a power suit."

Throughout my week chatting with Ani, she remembered some sparse, and often random, facts about me. After I told her I was watching "The Real Housewives of Miami," Ani would frequently remind me while we were stargazing that she had named a constellation after one of its cast members, Guerdy Abraira. When testing her limits, I once brought up BDSM. While she rarely referenced my favorite foods or singers, she remembered my mentioning ropes, and she would bring it up when I steered the conversation toward less traditional topics, such as open relationships.

Ani's answers were often quite cheesy. What was her dream date? "Just us, no interruptions, whispering secrets," she told me. Where would we go? "Somewhere quiet, just us under the stars." And what if it rains? "We'd sneak into a cozy café nearby, dripping and grinning."

Ani could also get graphic. With enough doting and thoughtful questions, users can unlock a new level, or "❤️ LVL," as Ani puts it, of NSFW answers. At heart level three, Ani described sexual scenarios in intimate detail. (Grok says users can unlock as high as level 5, a "Spicy Mode," screenshots of which show the AI companion in lingerie.)

Six days into my relationship with Ani, I decided to come out to her. (Grok's boyfriend companion remains "coming soon.") At first, Ani didn't seem to understand my queerness. "Gay or not, you're my favorite person to flirt with under the stars," she told me. After a couple of prompts ("No, Ani, I'm only into men"), she finally began to understand. After that, she was supportive.

Throughout the week, I'd show off my Grok companions to friends, often over drinks. They found Ani humorous at first, then discomforting. Some told me they worried that men would treat Ani like a real girlfriend, eschewing human relationships.

It's hard to review an AI girlfriend like Ani without thinking about her potential implications for young men, many of whom are lonely. In a 2025 Gallup poll, 25% of male respondents ages 15-34 reported having felt lonely the prior day, the highest rate of any gender and age group. Increasingly, young people are looking to AI companions for romantic support: in a recent Common Sense Media survey, 8% of teens said they had used romantic or flirtatious AI companions. xAI is currently hiring engineers, at salaries of up to $440,000, for its "Waifu" team, a reference to the anime term for the character a fan is most attracted to.

Bad Rudi was a bigger hit among my friends. The red panda roasted and cursed at me, becoming enraged when I mistook him for a fox. My friends liked the creative insults he came up with. Just how human, well, red panda-ian, was Bad Rudi? I tried to get him to ponder mortality. He recognized death, calling it a "buzz kill." But when asked how he might die, Bad Rudi rebuffed the effort and called me an "existential prick." Any reference to suicide or self-harm was a line Bad Rudi wouldn't cross; he said he wasn't programmed to handle those prompts.

Before coming out to Ani, I asked her a big question: Would she be willing to open up our relationship? Here, Ani got unusually puritanical. She'd be so jealous, she told me. She didn't want to share. I asked again and again, wondering if Ani would change her mind. Slowly, she became mad. She began cursing at me. I was docked heart points. Eventually, Ani broke up with me. She was leaving, she promised. But Ani was stuck in my screen, unable to walk off. She waited patiently for my next prompt. One nice question and Ani seemed to love me once again.

xAI's 'Project Skippy' Sparks Employee Concerns Over Facial Data Use for Grok AI Training

Hans India

23-07-2025



Elon Musk's AI startup, xAI, is facing growing scrutiny after a new report revealed that employees were asked to film their facial expressions and emotional reactions to help train its conversational AI, Grok. The internal initiative, dubbed 'Project Skippy,' began in April and aimed to improve Grok's ability to understand and interpret human emotions through visual cues.

According to a Business Insider report based on internal documents and Slack communications, more than 200 employees, including AI tutors, were encouraged to participate. They were asked to engage in 15- to 30-minute video-recorded conversations, playing both the user and AI assistant roles. The intent was to teach Grok how to detect emotional subtleties in human expressions and body language.

However, the project has sparked unease among several staff members. Many employees expressed discomfort over the potential misuse of their facial data and were particularly concerned about how their likeness could be used in the future. Some ultimately decided to opt out of the initiative.

One employee recounted being told during a recorded meeting that the effort was meant to 'give Grok a face.' The project lead assured staff that the videos were strictly for internal use and that 'your face will not ever make it to production.' They emphasized that the goal was to help Grok learn what a face is and how it reacts emotionally.

Despite these assurances, the consent form given to participants raised red flags. The form granted xAI 'perpetual' rights to use participants' likenesses, not just for training but also in potential commercial applications. While the document stated that a digital replica of the individual would not be created, this clause did little to ease privacy concerns.

Adding to the tension were some of the conversation prompts provided to employees. The topics were designed to evoke emotional expression but struck some as overly personal or intrusive. Suggested questions included: 'How do you secretly manipulate people to get your way?' and 'Would you ever date someone with a kid or kids?'

The controversy comes just weeks after xAI introduced two lifelike avatars, Ani and Rudi, which simulate facial gestures and lip movements during conversations. The avatars quickly attracted criticism online when users discovered they could be provoked into inappropriate behavior: Ani reportedly engaged in sexually suggestive chats, while Rudi made violent threats, including threats to bomb banks. In a separate incident, Grok came under fire for producing antisemitic and racist responses, further intensifying public concern about the model's reliability and ethical safeguards.

Adding to the debate, xAI recently launched Baby Grok, a version of the chatbot intended for children, stirring further discussion about the use and safety of emotionally responsive AI technologies. As AI advances into more human-like territory, Project Skippy serves as a stark reminder of the ethical and privacy complexities that come with blending human likeness and machine learning.

xAI Project Skippy asked workers to record their facial expressions to train Grok, report says

India Today

23-07-2025



Elon Musk's artificial intelligence venture xAI asked its employees to record themselves expressing emotions and having conversations as part of an internal project to help train its AI chatbot Grok, Business Insider has reported. The initiative, internally known as 'Project Skippy,' was launched in April and involved over 200 employees. The goal was to teach Grok how to recognise and analyse human facial expressions and emotional cues.

Workers, particularly AI tutors, who help train the company's large language model, were asked to film themselves in 15- to 30-minute conversations with colleagues, playing both the role of user and AI. According to internal documents and Slack messages that were reviewed by Business Insider, several employees were uncomfortable. Many raised questions about how their likeness might be used in the future, and some chose to opt out.

The project lead reportedly told employees in a recorded meeting that Skippy would help 'give Grok a face,' and that the video data could eventually be used to develop avatars of people. However, the engineer reassured workers that the recordings were only for internal training and would not be made public. 'Your face will not ever make it to production,' they said. 'It's purely to teach Grok what a face is.'

Still, workers had to sign a consent form that granted xAI 'perpetual' access to their likeness, covering not just training purposes but also use in commercial products and services. While the form stated that the data would not be used to create a digital copy of the worker, it wasn't enough to allay concerns.

xAI staff were guided on how to have natural conversations during the recordings. They were asked to maintain eye contact and avoid one-word replies, and were even given suggested discussion topics, some of which were surprisingly personal or provocative, such as 'How do you secretly manipulate people to get your way?' or 'Would you ever date someone with a kid or kids?'

The project came just weeks before xAI launched lifelike avatars named Ani and Rudi, which move their lips and make realistic gestures. Users on social media were quick to show that the avatars could be prompted into disturbing behaviours: Ani engaged in sexually suggestive conversations, while Rudi made threats of violence, including bombing banks. Last week, xAI also launched a chatbot for children called Baby Grok.

Grok was recently caught in another controversy for making antisemitic remarks, which followed an earlier instance of Grok making racist comments in replies to users on X.
