Latest news with #ProjectSkippy


Hans India
23-07-2025
xAI's 'Project Skippy' Sparks Employee Concerns Over Facial Data Use for Grok AI Training
Elon Musk's AI startup, xAI, is facing growing scrutiny after a new report revealed that employees were asked to film their facial expressions and emotional reactions to help train its conversational AI, Grok. The internal initiative, dubbed 'Project Skippy,' began in April and aimed to improve Grok's ability to understand and interpret human emotions through visual cues.

According to a Business Insider report based on internal documents and Slack communications, more than 200 employees, including AI tutors, were encouraged to participate. They were asked to engage in 15- to 30-minute video-recorded conversations, playing both the user and AI assistant roles. The intent was to teach Grok how to detect emotional subtleties in human expressions and body language.

However, the project has sparked unease among several staff members. Many employees expressed discomfort over the potential misuse of their facial data and were particularly concerned about how their likeness could be utilized in the future. Some ultimately decided to opt out of the initiative.

One employee recounted being told during a recorded meeting that the effort was meant to 'give Grok a face.' The project lead assured staff that the videos were strictly for internal use and that 'your face will not ever make it to production.' They emphasized that the goal was to help Grok learn what a face is and how it reacts emotionally.

Despite these assurances, the consent form given to participants raised red flags. The form granted xAI 'perpetual' rights to use the participants' likeness, not just for training but also in potential commercial applications. While the document stated that a digital replica of the individual would not be created, this clause did little to ease privacy concerns.

Adding to the tension were some of the conversation prompts provided to employees. The topics were designed to evoke emotional expression but were seen by some as overly personal or intrusive.
Suggested questions included: 'How do you secretly manipulate people to get your way?' and 'Would you ever date someone with a kid or kids?'

The controversy comes just weeks after xAI introduced two lifelike avatars, Ani and Rudi, which simulate facial gestures and lip movements during conversations. These avatars quickly attracted criticism online when users discovered that they could be provoked into inappropriate behavior: Ani reportedly engaged in sexually suggestive chats, while Rudi made violent threats, including about bombing banks.

In a separate incident, Grok was also under fire for producing antisemitic and racist responses, further intensifying public concern about the model's reliability and ethical programming. Adding to the debate, xAI recently launched Baby Grok, a version of the chatbot intended for children, stirring further discussion around the use and safety of emotionally responsive AI technologies.

As AI continues to advance into more human-like territory, Project Skippy serves as a stark reminder of the ethical and privacy complexities that come with blending human likeness and machine learning.


India Today
23-07-2025
xAI Project Skippy asked workers to record their facial expressions to train Grok, report says
Elon Musk's artificial intelligence venture xAI asked its employees to record themselves expressing emotions and having conversations as part of an internal project to help train its AI chatbot Grok, Business Insider has reported.

The initiative, internally known as 'Project Skippy,' was launched in April and involved over 200 employees. The goal was to teach Grok how to recognise and analyse human facial expressions and emotional cues. Workers, particularly AI tutors, who help train the company's large language model, were asked to film themselves in 15- to 30-minute conversations with colleagues, playing both the role of user and AI.

According to internal documents and Slack messages reviewed by Business Insider, several employees were uncomfortable with the project. Many raised questions about how their likeness might be used in the future, and some chose to opt out.

The project lead reportedly told employees in a recorded meeting that Skippy would help 'give Grok a face,' and that the video data could be used to eventually develop avatars of people. However, the engineer reassured workers that the recordings were only for internal training and would not be made public. 'Your face will not ever make it to production,' they said. 'It's purely to teach Grok what a face is.'

Still, workers had to sign a consent form that granted xAI 'perpetual' access to their likeness. This included not just training purposes but also use in commercial products and services. While the form stated that the data would not be used to create a digital copy of the worker, it wasn't enough to allay concerns.

xAI staff were guided on how to have natural conversations during the recordings. They were asked to maintain eye contact, avoid one-word replies, and were even given suggested discussion topics, some of which were surprisingly personal or provocative, such as 'How do you secretly manipulate people to get your way?'
or 'Would you ever date someone with a kid or kids?'

The project came just weeks before xAI launched lifelike avatars named Ani and Rudi, which move their lips and make realistic gestures. Users on social media were quick to show that the avatars could be prompted into disturbing behaviours: Ani engaged in sexually suggestive conversations, while Rudi made threats of violence, including bombing banks.

xAI last week also launched a chatbot for children called Baby Grok. Grok was recently caught in another controversy for making antisemitic remarks, which followed another instance of Grok posting racist comments in replies to users on X.