San Francisco AI company lays off 200 after Meta's multibillion-dollar deal

Just weeks after Meta invested $14.3 billion in Scale AI and tapped its founder, Alexandr Wang, to lead Meta's new artificial intelligence initiative, the data-labeling startup is cutting roughly 200 full-time employees — about 14% of its staff — and parting ways with 500 contractors worldwide.
The cuts, announced in a company-wide memo on Wednesday by interim CEO Jason Droege, mark a dramatic shift for the once high-flying San Francisco startup, which has played a critical role in preparing data for major AI players including OpenAI, Google and Microsoft.
'The reasons for these changes are straightforward: we ramped up our GenAI capacity too quickly over the past year,' Droege wrote in the memo obtained by multiple media outlets. 'While that felt like the right decision at the time, it's clear this approach created inefficiencies and redundancies. We created too many layers, excessive bureaucracy, and unhelpful confusion about the team's mission.'
Scale AI's generative AI division — the unit at the heart of the layoffs — is being reorganized from 16 pods into five focused teams: code, languages, experts, experimental and audio. The company's go-to-market team will also be consolidated into a single 'demand generation' group.
The company emphasized that it remains well-funded and is preparing to 'significantly increase headcount' in enterprise and government-facing business units later this year.
The layoffs also land amid shifting partnerships across the industry: since the Meta deal, major customers including Google and OpenAI have reportedly scaled back their work with Scale AI, wary of its new ties to a competitor.
A spokesperson for the company said affected employees have been provided severance, with full-time roles paid through mid-September.

Related Articles

Why I'm Suing OpenAI, the Creator of ChatGPT

Scientific American

'I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones,' wrote New York Times technology columnist Kevin Roose in March, 'and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.' He's right. That's why I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company has itself called for from its 'large language model.'

We are at a pivotal moment. Leaders in AI development—including OpenAI's own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: 'I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there'll be great companies created with serious machine learning.' Yes, he was probably joking—but it's not a joke.

Eight years later, in May 2023, more than 1,000 technology leaders, including Altman himself, signed an open letter comparing AI risks to other existential threats like climate change and pandemics. 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,' the letter, released by the Center for AI Safety, a California nonprofit, says in its entirety.

I'm at the end of my rope. For the past two years, I've tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before the actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. And in the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.

Despite its previous statements, OpenAI has abandoned its key safety commitments, including walking back its 'superalignment' initiative, which promised to dedicate 20 percent of computational resources to safety research, and, late last year, reversing its prohibition on military applications. Its critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, 'Over the past years, safety culture and processes have taken a backseat to shiny products.' The company's governance structure was fundamentally altered during a November 2023 leadership crisis, as the reconstituted board removed important safety-focused oversight mechanisms.
Most recently, in April, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing 'high risk' and 'critical risk' AI models, 'possibly helping to swing elections or create highly effective propaganda campaigns,' according to Fortune magazine.

In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a 'political question' that should be addressed by Congress and the president. I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.

Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii's professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging. Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:

Product liability claims: OpenAI's AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company's deliberate removal of safety measures it previously deemed essential.

Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.

Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.

Public nuisance: OpenAI's deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals (which Hawaii is part of) establish that technology companies can be held liable for design defects that create foreseeable risks of harm.

I'm not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed, including reinstating its previous commitment to allocate 20 percent of resources to alignment and safety research; implementing the safety framework outlined in its own publication 'Planning for AGI and Beyond,' which attempts to create guardrails for dealing with AI as or more intelligent than its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against misuse for manipulation of democratic processes; and developing protocols to protect Hawaii's unique cultural and natural resources. These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.

While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.
The development of increasingly capable AI systems is likely to be one of the most significant technological transformations in human history, many experts believe—perhaps in a league with fire, according to Google CEO Sundar Pichai. 'AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,' Pichai said in 2018. He's right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures. What is happening now with OpenAI's breakneck AI development and deployment to the public is, to echo technologist Tristan Harris's succinct April 2025 summary, 'insane.' My lawsuit aims to restore just a little bit of sanity.

The '10x engineer' is old news. Surge's CEO says '100x engineers' are here.

Business Insider

A "10x engineer" isn't cool anymore. You know what's cool? A "100x engineer." As the Silicon Valley saying goes, the "10x engineer" is capable of producing 10 times the work of their colleagues, developing projects and writing code at a quicker pace. In the age of AI, a top-end engineer's multiplier is itself getting a multiplier, according to Surge CEO Edwin Chen. Chen boot-strapped his way to $1 billion in revenue. The CEO of Surge self-funded his company, taking no VC money — though he's now reportedly looking to raise up to an additional $1 billion in capital. On the 20VC podcast, he said a "100x engineer" is now possible — and could help lean startups reach new heights. "Already you have a lot of these single-person startups that are already doing $10 million in revenue," Chen said. "If AI is adding all this efficiency, then yeah, I can definitely see this multiplying 100x to get to this $1 billion single-person company." Efficiency gains can be vital to startups looking to stay lean. Chen said that Surge was already "so much more efficient" than its peer companies like Scale AI, Surge's biggest data labeling rival, which reportedly brought in $870 million in 2024 after multiple rounds of funding. Chen also said that Surge's lack of a sales or PR team helped keep it lean. While the "10x engineer" dates back to a 1968 study about programming performance, the term was later popularized among Silicon Valley executives. In his book "Zero to One," Peter Thiel coined the "10x improvement" rule, claiming that startups needed to improve on existing alternatives by a factor of ten. Chen is a believer in the " 10x engineer." Some are 2-3x faster at coding, or work 2-3x harder, or have 2-3x less side tasks, he said. Multiplied together, engineers can reach 10x productivity. "2-3x is often actually an understatement," Chen said. "I know people who literally are five times more productive coders than anybody else." The advent of generative AI and coding tools supercharges Chen's math: "Add in all the AI efficiencies that you get. You just multiply all those things out and you get to 100," he said. Agentic AI coding tools have taken over much of software engineering, writing code for developers, sometime with minimal human editing necessary. But these tools still need a prompt, which Chen said makes them most useful to those who have high-level ideas. "It often just removes a lot of the drudgery of your day-to-day work," Chen said. "I do think it disproportionately favors people who are already the '10x engineers.'"

Apple's Tim Cook is under pressure—but there are a few key reasons leadership experts think he's still the guy for the job

Yahoo

Is it time for Apple CEO Tim Cook to clean out his office? You might think so after the past few weeks. But top experts on CEO successions counsel settling down and looking at the big picture. It shows why Cook might remain CEO for years and why that might even be the best course for Apple.

The recent depressing news included a top Apple AI executive's defection to Meta just weeks after another high-level AI researcher had left—especially painful because Apple is widely seen as a laggard in the world's hottest technology, AI. Last month Apple's annual Worldwide Developers Conference, often a scene of breathtaking new products or services, was 'a snoozer,' said Wedbush analyst Dan Ives. While stocks of Microsoft and Alphabet are hitting new highs, Apple is down 16% this year.

Little wonder that a Wall Street research firm, LightShed, concluded Cook is no longer the right boss for Apple. The company 'now needs a product-focused CEO, not one centered on logistics,' the firm wrote. 'AI is not something that Apple can merely 'pull the string' on. Missing on AI could fundamentally alter the company's long-term trajectory and ability to grow at all.'

By earlier standards, Cook would have been on his way out in any case. In August he will have been CEO for 14 years, and in November he turns 65. But 65 is nothing special anymore, and no board of directors will hurry to dispatch a CEO who created far more shareholder value than his legendary predecessor, Steve Jobs, ever did. At least in theory, the options for Cook and the board are wide open.

So what should Cook and the board do? To get an authoritative answer, Fortune recruited three eminent executive search experts, each of whom has counseled scores of major boards on managing successions. We agreed to withhold their names so that they could speak completely candidly. Here are their combined thoughts—as well as the final word from preeminent business historian Richard Tedlow, who gives a compelling comeback to anyone who thinks Cook's time is up.

Apple's competitive environment now

'Two things are happening in parallel. One is AI, which is a much bigger deal than the internet was. The second thing is the evolution of the hardware. There's a 'good enough' problem. For most users, the phones are good enough. I'm a power phone user, and I use a several-year-old iPhone because there's no compelling reason to upgrade. Those two things in combination make for a more challenging environment for Apple.'

At the same time, the consultants see a temporary upside for Apple. 'They don't have the existential threat that Google has from AI. Apple still has the platform—I'm still using my Apple phone to reference ChatGPT. They're not losing revenue in that exchange. So if you look at where they make money, they actually don't need a quick entry [into AI].'

'Remember, Apple has never been first to market with anything. They're considered to be the most innovative company in the world, but they have largely taken a concept that's been proven and made it applicable for use in ways that are highly innovative and esthetically appealing.'

Still, the clock has been ticking for a while. 'I would be shocked if within the next 12 months they do not release a truly functioning baseline agent to replace Siri.'

When might Cook be thinking of stepping down?

'That's really the foundational question. If it's two years, are there any outsiders who could plausibly come in? Are there any boomerang people who could come back from outside of Apple?'

'Apple is less likely to go outside because of the cultural history of outsiders at Apple. It's almost revered, the story of how outsiders almost killed Apple [before Steve Jobs returned in 1997]. We hear pretty consistently that Cook is thinking of an age 68 to 70 timeline [which would be three to five years from now]. He feels that, with AI, there's some unfinished business.'

'I don't think Tim will be CEO until he's 70. I think he's tired, honestly. It's been an exhausting journey, and he's amazing, but I do sense a different energy.'

What kind of executive will Cook's successor be?

'The common wisdom is that they really need a product visionary, as opposed to the operational genius that he was. I would argue that until the tariff and supply chain issues get resolved, they probably do need him at the helm because that is a non-trivial issue for them.'

Who are the leading candidates to succeed Cook?

'The most obvious are John [Ternus] and Craig [Federighi].' Ternus is senior vice president of hardware engineering; Federighi is senior vice president of software engineering. 'But given the timeline, [the company] could still make quite a few changes, and it could be somebody quite different.'

'There are a few companies where being a CEO is really like being the president of a country, and Apple is one of those. There are maybe a dozen. It sounds kind of heretical to say, but to some degree, the smaller part of the job is effectively operating the company.'

Bottom line, what is the big-picture assessment of Tim Cook?

For this we turn to a business historian, Richard Tedlow, an emeritus professor at the Harvard Business School. Like any good professor, he asks questions. He starts by asking five crucial questions about Apple: Does it satisfy customers? Does it come from behind? Does it have a powerful corporate culture? Is it willing to admit mistakes? Does it have 'an imagination of disaster,' a realization that things could go badly wrong? Approvingly, he answers Yes to all.

Told that a Wall Street research firm has said Cook should resign, Tedlow notes that Warren Buffett invited Cook to Berkshire Hathaway's annual meeting in May and said, 'I'm somewhat embarrassed to say that Tim Cook has made Berkshire a lot more money than I've ever made [for] Berkshire Hathaway.' Tedlow asks, 'If that Wall Street firm called Buffett and said, 'Warren, do you think it's time to get rid of Tim Cook,' what do you think Warren would say?'

Tedlow's ultimate query: 'If you could choose anybody to be the CEO of Apple right now—anybody in the whole history of business, from John Jacob Astor to John D. Rockefeller to Tom Watson Sr. to Andy Grove to Tim Cook—whom would you choose? This is actually not a difficult question.'
