Scale AI exposed sensitive data about clients like Meta and xAI in public Google Docs, BI finds
Scale AI routinely uses public Google Docs to track work for high-profile customers like Google, Meta, and xAI, leaving multiple AI training documents labeled "confidential" accessible to anyone with the link, Business Insider found.
Contractors told BI the company relies on public Google Docs to share internal files, a method that's efficient for its vast army of at least 240,000 contractors but presents clear cybersecurity and confidentiality risks.
Scale AI also left public Google Docs with sensitive details about thousands of its contractors, including their private email addresses and whether they were suspected of "cheating." Some of those documents could not only be viewed but also edited by anyone with the right URL.
There's no indication that Scale AI has suffered a breach as a result. Still, two cybersecurity experts told BI that such practices could leave the company and its clients vulnerable to various kinds of attacks, such as hackers impersonating contractors or slipping malware into publicly accessible files.
Scale AI told Business Insider it takes data security seriously and is looking into the matter.
"We are conducting a thorough investigation and have disabled any user's ability to publicly share documents from Scale-managed systems," a Scale AI spokesperson said. "We remain committed to robust technical and policy safeguards to protect confidential information and are always working to strengthen our practices."
Meta declined to comment. Google and xAI didn't respond to requests for comment.
In the wake of Meta's blockbuster investment in Scale AI, clients like Google, OpenAI, and xAI paused work with the company. In a blog post last week, Scale reassured Big Tech clients that it remains a neutral and independent partner with strict security standards.
The company said that "ensuring customer trust has been and will always be a top priority," and that it has "robust technical and policy safeguards to protect customers' confidential information."
BI's findings raise questions about whether Scale did enough to ensure security and whether Meta was aware of the issue before writing the check.
Confidential AI projects were accessible
BI was able to view thousands of pages of project documents across 85 individual Google Docs tied to Scale AI's work with Big Tech clients. The documents include sensitive details, such as how Google used ChatGPT to improve its own struggling chatbot, then called Bard.
Scale also left at least seven instruction manuals marked "confidential" by Google accessible to anyone with the link. Those documents spell out what Google thought was wrong with Bard — that it had difficulties answering complex questions — and how Scale contractors should fix it.
For Elon Musk's xAI, which had at least 10 generative AI projects with Scale as of April, public Google documents and spreadsheets showed details of "Project Xylophone," BI reported earlier this month. Training documents and a list of 700 conversation prompts revealed how the project focused on improving the AI's conversational skills across a wide array of topics, from zombie apocalypses to plumbing.
Meta training documents, marked confidential at the top, were also accessible to anyone with the link. These included links to accessible audio files with examples of "good" and "bad" speech prompts, suggesting the standards Meta set for expressiveness in its AI products.
Some of those projects focused on training Meta's chatbots to be more conversational and emotionally engaging while ensuring they handled sensitive topics safely, BI previously reported. As of April, Meta had at least 21 generative AI projects with Scale.
Several Scale AI contractors interviewed by BI said it was easy to figure out which client they were working for, even though projects were codenamed, often just from the nature of the task or the way the instructions were phrased. Sometimes it was even easier: One presentation seen by BI had Google's logo.
Even when projects were meant to be anonymized, contractors across different projects described instantly recognizing clients or products. In some cases, simply prompting the model or asking it directly which chatbot it was would reveal the underlying client, contractors said.
Scale AI left contractor information public
Other Google Docs exposed sensitive personal information about Scale's contractors. BI reviewed spreadsheets that were not locked down and that listed the names and private Gmail addresses of thousands of workers. Several contacted by BI said they were surprised to learn their details were accessible to anyone with the URL of the document.
Many of the documents also include details about contractors' work performance.
One spreadsheet titled "Good and Bad Folks" categorizes dozens of workers as either "high quality" or suspected of "cheating." Another document, titled "move all cheating taskers," lists hundreds of personal email addresses and flags workers for "suspicious behavior."
Another sheet names nearly 1,000 contractors who were "mistakenly banned" from Scale AI's platforms.
Other documents show how much individual contractors were paid, along with detailed notes on pay disputes and discrepancies.
The system seemed 'incredibly janky'
Five current and former Scale AI contractors who worked on separate projects told BI that the use of public Google Docs was widespread across the company.
Contractors said that using them streamlined operations for Scale, which relies mostly on freelance contributors. Managing individual access permissions for each contractor would have slowed down the process.
Scale AI's internal platform requires workers to verify themselves, sometimes using their camera, contractors told BI.
At the same time, many documents containing information on how to train AI models can be accessed, without any verification, through public links or through links embedded in other documents.
"The whole Google Docs system always seemed incredibly janky," one worker said.
Two other workers said they retained access to old projects they no longer worked on, which were sometimes updated with requests from the client company regarding how the models should be trained.
'Of course it's dangerous'
Organizing internal work through public Google Docs can create serious cybersecurity risks, Joseph Steinberg, a Columbia University cybersecurity lecturer, told BI.
"Of course it's dangerous. In the best-case scenario, it's just enabling social engineering," he said.
Social engineering refers to attacks where hackers trick employees or contractors into giving up access, often by impersonating someone within the company.
Leaving details about thousands of contractors easily accessible creates many opportunities for that kind of breach, Steinberg said.
At the same time, investing more in security can slow down growth-oriented startups.
"The companies that actually spend time doing security right very often lose out because other companies move faster to market," Steinberg said.
The fact that some of the Google Docs were editable by anyone created further risks, such as bad actors inserting malicious links into the documents for others to click, Stephanie Kurtz, a regional director at the cybersecurity firm Trace3, told BI.
Kurtz added that companies should start with managing access via invites.
"Putting it out there and hoping somebody doesn't share a link, that's not a great strategy there," she said.
