Latest news with #ProjectXylophone


Time of India
11 hours ago
- Business
- Time of India
AI startup that Mark Zuckerberg paid $14 billion 'leaked' sensitive data of Google, Elon Musk's xAI and Facebook: What the company has to say
Scale AI, the AI startup that is a key partner for companies like Meta and xAI, is facing scrutiny after a report revealed that the company has been exposing sensitive client and contractor data through publicly accessible Google Docs. The findings, released by Business Insider, raise serious questions about Scale AI's security practices and its commitment to client confidentiality. The report uncovered thousands of pages of project documents across 85 individual Google Docs, some containing highly confidential information related to Scale AI's work with major tech clients.
Documents show information on training data related to Google and Elon Musk's xAI
These documents included details on how Google utilised ChatGPT to refine its Bard chatbot, as well as at least seven 'confidential' instruction manuals from Google outlining issues with Bard and how contractors should address them. For Elon Musk's xAI, public documents revealed specifics of "Project Xylophone," including training materials and 700 conversation prompts aimed at enhancing the AI's conversational abilities on diverse topics.
Similarly, Meta training documents, marked confidential, were found publicly accessible, containing links to audio files demonstrating "good" and "bad" speech prompts for its AI products. Contractors reported that despite attempts at codenaming, it was often easy to identify the client they were working for because of the nature of the tasks or even explicit company logos embedded in presentations. Some said they could determine the client simply by prompting the AI model directly.
Contractor data also exposed: Report
Beyond client information, Scale AI also left sensitive personal data belonging to thousands of its contractors exposed in unsecured Google Docs.
Spreadsheets, readily viewable by anyone with the URL, listed private Gmail addresses, work performance details, and even categorized some workers as 'high quality' or suspected of 'cheating.'
Scale AI responds to report on 'leaking' data
In response to the publication's findings, Scale AI stated it takes data security seriously and is investigating the matter. "We are conducting a thorough investigation and have disabled any user's ability to publicly share documents from Scale-managed systems," a spokesperson said. "We remain committed to robust technical and policy safeguards to protect confidential information and are always working to strengthen our practices," the spokesperson added.
The revelations come on the heels of Meta's blockbuster investment in Scale AI. However, clients including Google, OpenAI, and xAI reportedly paused work with Scale AI following Meta's investment.


India Today
06-06-2025
- Business
- India Today
Elon Musk's xAI is training AI to deal with zombie-apocalypse and life on Mars
Elon Musk's AI company, xAI, is trying something different to improve how its voice assistant talks. Instead of just feeding it facts or scripted lines, the company is training it with conversations about unusual topics like surviving a zombie apocalypse or living on Mars. The goal is to make the AI sound more natural and less robotic.
These conversations are being recorded by freelancers hired through a company called Scale AI, Business Insider reported. These workers are paid to talk about various subjects, from superhero dilemmas and fixing plumbing to deep questions about ethics and personal experiences. The recordings are meant to help xAI build a voice assistant that feels more like talking to a real person.
The training effort is part of what's being called 'Project Xylophone.' Workers are asked to talk either solo or in groups and have casual-sounding conversations in different languages and styles. Some recordings even involve role-playing or adding background noise to make them feel real. About 10 per cent of the prompts reportedly involve science fiction topics, including life on other planets.
xAI hasn't confirmed that this data is specifically for Grok, its AI model that recently got a voice update, but the timing suggests it's likely. The bigger idea is to give Grok a more human tone by training it with a wide mix of real-life and imaginary conversations.
Pay for these tasks varies, and some freelancers say the rate has dropped recently. Still, the project shows how far AI companies are going to make their bots sound more human. By using conversations that people might actually have in daily life, or even in extreme situations like a zombie outbreak, xAI may be trying to create something that doesn't just talk but connects.
This approach could make future AI chatbots more relatable and easier for people to talk to. By using real conversations with emotional tones, jokes, and even odd topics, xAI is trying to create an assistant that understands not just words but the way people actually speak and feel.
However, there are concerns too, like fairness in data use and how realistic the AI becomes.

Business Insider
05-06-2025
- Business
- Business Insider
Elon Musk's xAI is training voice models with zombie-apocalypse chats, plumbing fails, and life on Mars, leaked docs show
What would you take from your house if there were a zombie apocalypse? What type of person would you like to live on Mars with?
These are some of the questions being used to train AI voice models for Elon Musk's xAI, alongside everyday topics about DIY plumbing and trip planning, documents obtained by Business Insider show. Freelancers for data-labeling company Scale AI are being paid to record conversations with other contractors about things like colonizing Mars — a goal of Musk's — and superheroes, in a bid to make xAI's voice models sound less like a robot and more like a real person.
As of April, Scale AI was running at least 10 generative AI projects for xAI, according to an internal dashboard seen by BI. The dashboard lists over 100 AI training projects for xAI and other clients, including Apple, Google DeepMind, and Meta. Scale AI's work comes as companies across the industry are pushing to make their bots more conversational and human-like to help compete for users who might pay for their premium versions. Scale AI and xAI did not respond to requests for comment from Business Insider.
Inside 'Project Xylophone'
Business Insider obtained four Scale AI documents — two sets of project instructions, a set of instructions for reviewers who check submissions, and a conversation topic guide — that outline how 'Project Xylophone' works for xAI. The documents do not state which xAI model is being trained. In late February, Musk announced the beta rollout of a voice mode for Grok, the company's only publicly known AI model.
The Scale AI project dashboard shows contractors working on Project Xylophone are asked to record short conversations, focusing on 'audio quality and natural fluency.' They are especially encouraged to join if they have experience with voice acting. The dashboard says the project is aiming for 'engaging scripts, great voice acting, and high quality audio.' Scale's dashboard is not accessible to contractors, who may not know who the client is.
For Project Xylophone, gig workers located around the world can pick from hundreds of conversation topics about ethics, philosophy, business, and travel, and record answers in a variety of languages for a few dollars per task. The work is split between an invite-only project called 'Conversations,' which gig workers do in three-person teams, and 'Grasslands,' which they do solo.
'Conversations' teams are asked to set up realistic conversations with each other over Zoom. Contributors take turns asking questions from a prompt spreadsheet, which was active earlier this week. The sheet includes more than 700 conversation starters on a wide variety of topics, including postapocalyptic survival tactics, planning trips to India, and managing anxiety and panic attacks.
'If you were designing the 'culture' for the first Mars settlement, what Earth tradition would you definitely want to recreate, and what would you be excited to leave behind forever?' reads one prompt. BI found that about 10% of the conversation prompts in the document we reviewed are science fiction-related.
Examples of suggested conversation starters from the Scale AI documents:
- What's a 'villain' in your daily life that you wish a superhero team could swoop in and fix for everyone?
- If the zombie apocalypse hit tomorrow, what's the first thing you'd grab from your house before making a run for it?
- Imagine you're the mission psychologist for a Mars colony — what personality type or quirky trait would you secretly hope to find in your fellow colonists?
- What's the most memorable plumbing disaster you've experienced as a homeowner — and did you try to fix it yourself or immediately call for help?
- Do you remember the first time you had to ask for more money or better benefits? What was going through your head?
Other questions are about the US political and judiciary systems, but the set does not include hot-button political issues.
In the 'Conversations' arm, instructions for 'good' conversations are explicit: 'The recording must sound extremely natural, as if you were having a casual conversation with a friend. This includes being emotional, having varied intonations, and interrupting each other! Please avoid sounding like an interview.'
In the 'Grasslands' arm, solo workers are asked to create unscripted, natural-sounding recordings in their native language. Each worker is given a conversation type and subcategory and is told to let the conversation flow, in any setting they like, with background noise encouraged. There are dozens of subcategories, like 'Socratic questioning,' 'reflective storytelling,' 'courtly love scenarios,' 'hero-villain confrontations,' and 'collaborative puzzle-solving,' sometimes with different accents, sound effects, or invented linguistic patterns required.
Fast and accurate
Three Scale AI contractors, who asked not to be named because they signed nondisclosure agreements, said that projects are assigned to contractors based on their skill sets. Two of the contractors said that payment for the Grasslands project, which was assigned based on contractors' location and language expertise, started at $3 per task and was cut to $1 per task roughly a month later. Contractors have five minutes to complete each task, and each task is one recording.
Once contractors have recorded an audio file, they upload it to a Scale AI contributor platform and transcribe it manually, with the Grasslands document asking for filler words such as 'uh' to be left in. 'If someone has a slight pause, we should include a comma, even if grammatically that comma is incorrect,' one of the contractors told BI.
Large language models require vast amounts of quality data to improve. Recreating real-world scenarios, such as natural-sounding conversations between people, is one way to generate suitable data to feed into those models.
Training Grok
Project Xylophone is an example of a larger push by AI companies to inject personality into their AIs and stand out in an increasingly crowded space. BI reported last month that Meta ran a project via Scale AI asking gig workers training its AI to adopt different personas, such as 'a wise and mystical wizard' or a 'hyper-excited music theory student.' OpenAI's Sam Altman said in late April that the latest GPT-4o had become 'too sycophant-y and annoying,' prompting a reset to make its replies more natural.
xAI has marketed Grok as a politically edgier chatbot compared with what Musk has called 'woke' rivals, with training methods that sometimes lean heavily on right-wing or contrarian views, BI previously reported. Alongside xAI's outsourced work, the company has hundreds of in-house 'AI tutors' and plans to hire thousands more, BI reported in February, showing the huge human effort involved in training AI.
xAI has also ramped up its efforts to control Grok's unpredictable side. New hires are 'red teaming' Grok, stress-testing it for unsafe or policy-violating replies, especially on controversial topics and in 'NSFW' or 'unhinged' modes, BI reported in April. The safety push follows high-profile incidents, including a feature in March that allowed users to prompt Grok to use racial slurs and, most recently, unprompted responses about 'white genocide' in South Africa. xAI blamed the latter issue on an unauthorized prompt modification and promised stricter code review and around-the-clock monitoring.