Scale AI CEO Stresses Startup's Independence After Meta Deal


Mint · 13 hours ago
(Bloomberg) -- Scale AI's new leader said the data-labeling startup remains independent from Meta Platforms Inc. despite the social media giant taking a 49% stake just weeks ago, and is focused on expanding its business.
Interim Chief Executive Officer Jason Droege said Meta, a customer since 2019, won't receive special treatment even after its $14.3 billion investment.
'There's no preferential access that they have to anything,' Droege said Tuesday in an interview, one of his first since taking the interim CEO role in mid-June. 'They are a customer, and we will support them like we do our other customers, that's the extent of the relationship.'
Scale's 28-year-old former CEO and co-founder Alexandr Wang left the startup to lead a new superintelligence unit at Meta, part of the Facebook parent company's multibillion-dollar investment to catch up on AI development. Less than a dozen of Scale's roughly 1,500 employees left to join Wang at Meta, Droege said.
Wang will continue to hold a seat on the board, but Meta won't receive any other board representation, Droege said, adding that Scale's customer data privacy rules and governance remain the same. The board doesn't have access to internal customer-specific data, he added.
'We have elaborate procedures to ensure the privacy and security of our customers — their IP, their data — and that it doesn't make its way across our customer base,' Droege said.
Droege, who was promoted from his previous role as chief strategy officer, is a seasoned Silicon Valley tech executive. Prior to joining Scale, he was a partner at venture capital firm Benchmark, and before that was a vice president at Uber Technologies Inc., where he launched the company's Uber Eats product.
Now, he has the job of evolving Scale AI's business in an increasingly crowded corner of the AI market.
For years, Scale has been the best-known name in the market for helping tech firms label and annotate the data needed to build AI models; it generated about $870 million in revenue in 2024 and expects $2 billion in revenue this year, Bloomberg News reported in April.
Yet a growing number of companies, including Turing, Invisible Technologies, Labelbox and Uber, now offer various services to meet AI developers' bottomless need for data. And it's likely to only get trickier, as Scale AI rivals are now seeing a surge in interest from customers, some of whom may be worried about Meta getting added visibility into their AI development process.
In light of the Meta investment and partnership with Scale, some of those customers, including OpenAI and Google, are cutting ties with the company, as Bloomberg and others have reported.
While data labeling remains a large part of Scale's business, Droege said the startup is also expanding its application business that provides services on top of other AI foundation models. That app business is currently making nine figures in revenue, Droege said, without giving a specific number, and includes Fortune 500 companies in health care, education and telecommunications. Scale also counts the US government as one of its customers.
The CEO added that Scale will continue to work with many different kinds of AI models rather than focusing on Meta's Llama models exclusively.
As Meta battles other AI companies like OpenAI, Google and Anthropic for top talent, Droege said he's communicating to his employees that Scale is a business undergoing a significant change, and there's still an 'enormous opportunity' ahead as the AI industry continues to grow. He also pointed out Scale's ability to adapt, as over time the company has taken on different kinds of data-related work — from autonomous vehicles to generative AI — and worked with enterprise and government customers.
'This is an extremely agile company,' he said.
More stories like this are available on bloomberg.com


Related Articles

Sovereign Gold Bonds earn over 200% returns: RBI announces redemption price for some SGBs

Time of India

29 minutes ago



Sovereign Gold Bond (SGB) redemption: Bought sovereign gold bonds in 2017 or 2018? You are in for good news, as the RBI has announced the redemption prices for these SGBs, offering a return of over 200%. The Reserve Bank of India (RBI) announced the redemption price for two sovereign gold bond tranches due for premature redemption on July 1. Investors holding the SGB 2017-18 Series-XIV and SGB 2018-19 Series-IV will receive Rs 9,628 per unit of SGB. Although the tenure of these gold bonds is eight years, they allow premature redemption after the fifth year from the date of issue.

Calculating the SGB redemption

According to the RBI press release issued on June 30, the redemption price is calculated as the simple average of the closing price of gold of 999 purity for the three business days preceding the redemption date, as published by the India Bullion and Jewellers Association Ltd (IBJA). For this cycle, the relevant dates were June 26, 27 and 30, ET reported.

SGB 2017-18 Series-XIV

Those who invested in the SGB 2017-18 Series-XIV in January 2018 at Rs 2,831 per gram are getting a return of almost 240%.

SGB 2018-19 Series-IV

The SGB 2018-19 Series-IV, issued a year later in January 2019 at Rs 3,119 per gram, has yielded an increase of over 208%. This implies a gain of Rs 6,509 per unit for investors who opt for premature withdrawal. This rate does not account for the fixed 2.5% interest offered annually.

Interest offered

The bonds carry an interest rate of 2.5% per annum, paid semi-annually. This is credited directly to the investor's bank account and is in addition to the capital appreciation. The final interest payout is made along with the principal on maturity or redemption.

How to redeem SGBs prematurely

Investors should first identify the bond tranche by checking the issue date, which determines when the bonds become eligible for premature redemption.
They should then submit the redemption request before the deadline specified in the schedule to ensure timely processing. Investors will be notified one month in advance about the upcoming maturity of their SGBs. On the maturity date, the proceeds will be credited directly to the bank account registered with the issuing authority. If personal details such as the bank account number or email ID have changed, it is important to promptly update this information with the bank, SHCIL, or post office to avoid any delays.
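The redemption arithmetic described above can be sketched in a few lines of Python. This is an illustrative calculation using only the figures reported in the article (issue prices of Rs 2,831 and Rs 3,119 per gram and the Rs 9,628 redemption price); the three daily IBJA closing prices behind the published average are not given in the release, so the averaging example uses hypothetical inputs.

```python
def redemption_price(closing_prices):
    """RBI formula: simple average of the IBJA closing price of
    999-purity gold for the three business days before redemption."""
    assert len(closing_prices) == 3
    return sum(closing_prices) / 3

def gain_per_unit(issue_price, redemption):
    """Absolute capital gain in rupees per gram/unit."""
    return redemption - issue_price

def return_pct(issue_price, redemption):
    """Capital appreciation as a percentage of the issue price
    (excludes the fixed 2.5% annual interest)."""
    return (redemption - issue_price) / issue_price * 100

# Hypothetical closing prices, chosen only to illustrate the averaging step:
print(redemption_price([9600, 9620, 9664]))   # 9628.0

# SGB 2017-18 Series-XIV: issued at Rs 2,831/gram, redeemed at Rs 9,628
print(round(return_pct(2831, 9628), 1))       # 240.1 -> "almost 240%"

# SGB 2018-19 Series-IV: issued at Rs 3,119/gram
print(gain_per_unit(3119, 9628))              # 6509 -> Rs 6,509 gain per unit
print(round(return_pct(3119, 9628), 1))       # 208.7 -> "over 208%"
```

Note that the percentage figures here are capital appreciation only; the 2.5% semi-annual interest payouts come on top of this.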

Read Mark Zuckerberg's full memo to employees on Meta Superintelligence Labs: We are going to …

Time of India

37 minutes ago



Facebook founder Mark Zuckerberg has officially announced the formation of Meta Superintelligence Labs. The new division aims to develop 'personal superintelligence for everyone' and will be led by former Scale AI CEO Alexandr Wang as its Chief AI Officer. The move follows Meta's recent $14.3 billion investment in Wang's data-labeling startup. Wang will co-lead MSL alongside former GitHub CEO Nat Friedman, who will focus on AI products and applied research. In a memo sent to employees, Zuckerberg also introduced the 11 team members the company has hired from competitors such as Google, OpenAI and Anthropic.

Read Meta CEO's full memo to his employees:

As the pace of AI progress accelerates, developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way. Today I want to share some details about how we're organizing our AI efforts to build towards our vision: personal superintelligence for everyone.

We're going to call our overall organization Meta Superintelligence Labs (MSL). This includes all of our foundations, product, and FAIR teams, as well as a new lab focused on developing the next generation of our models.

Alexandr Wang has joined Meta to serve as our Chief AI Officer and lead MSL. Alex and I have worked together for several years, and I consider him to be the most impressive founder of his generation. He has a clear sense of the historic importance of superintelligence, and as co-founder and CEO he built Scale AI into a fast-growing company involved in the development of almost all leading models across the industry.

Nat Friedman has also joined Meta to partner with Alex to lead MSL, heading our work on AI products and applied research. Nat will work with Connor to define his role going forward. He ran GitHub at Microsoft, and most recently has run one of the leading AI investment firms. Nat has served on our Meta Advisory Group for the last year, so he already has a good sense of our roadmap and what we need to do.

We also have several strong new team members joining today or who have joined in the past few weeks that I'm excited to share as well:

• Trapit Bansal -- pioneered RL on chain of thought and co-creator of o-series models at OpenAI.
• Shuchao Bi -- co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.
• Huiwen Chang -- co-creator of GPT-4o's image generation, and previously invented MaskGIT and Muse text-to-image architectures at Google Research.
• Ji Lin -- helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and the Operator reasoning stack.
• Joel Pobar -- inference at Anthropic. Previously at Meta for 11 years working on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
• Jack Rae -- pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led the Gopher and Chinchilla early LLM efforts at DeepMind.
• Hongyu Ren -- co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3 and o4-mini. Previously led a post-training group at OpenAI.
• Johan Schalkwyk -- former Google Fellow, early contributor to Sesame, and technical lead for Maya.
• Pei Sun -- post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.
• Jiahui Yu -- co-creator of o3, o4-mini, GPT-4.1 and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.
• Shengjia Zhao -- co-creator of ChatGPT, GPT-4, all mini models, 4.1 and o3. Previously led synthetic data at OpenAI.

I'm excited about the progress we have planned for Llama 4.1 and 4.2. These models power Meta AI, which is used by more than 1 billion monthly actives across our apps and an increasing number of agents across Meta that help improve our products and technology. We're committed to continuing to build out these models.

In parallel, we're going to start research on our next generation of models to get to the frontier in the next year or so. I've spent the past few months meeting top folks across Meta, other AI labs, and promising startups to put together the founding group for this small, talent-dense effort. We're still forming this group, and we'll ask several people across the AI org to join this lab as well.

Meta is uniquely positioned to deliver superintelligence to the world. We have a strong business that supports building out significantly more compute than smaller labs. We have deeper experience building and growing products that reach billions of people. We are pioneering and leading the AI glasses and wearables category that is growing very quickly. And our company structure allows us to move with vastly greater conviction and boldness. I'm optimistic that this new influx of talent and parallel approach to model development will set us up to deliver on the promise of personal superintelligence for everyone.

We have even more great people at all levels joining this effort in the coming weeks, so stay tuned. I'm excited to dive in and get to work.

Inside Meta's Superintelligence Lab: The scientists Mark Zuckerberg handpicked; the race to build real AGI

Time of India

38 minutes ago



Mark Zuckerberg has rarely been accused of thinking small. After attempting to redefine the internet through the metaverse, he's now set his sights on a more ambitious frontier: superintelligence, the idea that machines can one day match, or even surpass, the general intelligence of humans. To that end, Meta has created an elite unit with a name that sounds like it belongs in a sci-fi script: Meta Superintelligence Lab (MSL). But this isn't fiction. It's a real-world, founder-led moonshot, powered by aggressive hiring, audacious capital, and a cast of technologists who've quietly shaped today's AI landscape. This is not just a story of algorithms and GPUs. It's about power, persuasion, and the elite brains Zuckerberg believes will push Meta into the next epoch of intelligence.

The architects: Who's running Meta's AGI ambitions?

Zuckerberg has never been one to let bureaucracy slow him down. So he didn't delegate the hiring for MSL; he did it himself. The three minds now driving this initiative are not traditional corporate executives. They are product-obsessed builders, technologists who operate with startup urgency and an almost missionary belief in artificial general intelligence (AGI).

• Alexandr Wang -- Role at MSL: Chief AI Officer, Head of MSL. Past lives: Founder, Scale AI. Education: MIT dropout (Computer Science)
• Nat Friedman -- Role at MSL: Co-lead, Product & Applied AI. Past lives: CEO, GitHub; Microsoft executive. Education: B.S. Computer Science & Math, MIT
• Daniel Gross -- Role at MSL: Joining soon, role TBD. Past lives: Co-founder, Safe Superintelligence; ex-Apple, YC. Education: No degree; accepted into Y Combinator at 18

Wang, once dubbed the world's youngest self-made billionaire, is a data infrastructure prodigy who understands what it takes to feed modern AI. Friedman, a revered figure in the open-source community, knows how to productise deep tech.
And Gross, who reportedly shares Zuckerberg's intensity, brings a perspective grounded in AI alignment and risk. Together, they form a high-agency, no-nonsense leadership core: Zuckerberg's version of a Manhattan Project trio.

The Scientists: 11 defections that shook the AI world

If leadership provides the vision, the next 11 are the ones expected to engineer it. In a hiring spree that rattled OpenAI, DeepMind, and Anthropic, Meta recruited some of the world's most sought-after researchers, those who helped build GPT-4, Gemini, and several of the most important multimodal models of the decade.

• Jack Rae (DeepMind) -- LLMs, long-term memory in AI. Education: CMU, UCL
• Pei Sun (DeepMind) -- Structured reasoning (Gemini project). Education: Tsinghua, CMU
• Trapit Bansal (OpenAI) -- Chain-of-thought prompting, model alignment. Education: IIT Kanpur, UMass Amherst
• Shengjia Zhao (OpenAI) -- Alignment; co-creator of ChatGPT, GPT-4. Education: Tsinghua, Stanford
• Ji Lin (OpenAI) -- Model optimization, GPT-4 scaling. Education: Tsinghua, MIT
• Shuchao Bi (OpenAI) -- Speech-text integration. Education: Zhejiang, UC Berkeley
• Jiahui Yu (OpenAI/Google) -- Gemini vision, GPT-4 multimodal. Education: USTC, UIUC
• Hongyu Ren (OpenAI) -- Robustness and safety in LLMs. Education: Peking Univ., Stanford
• Huiwen Chang (Google) -- Muse, MaskGIT next-gen image generation. Education: Tsinghua, Princeton
• Johan Schalkwyk (Sesame AI/Google) -- Voice AI; led Google's voice search efforts. Education: Univ. of Pretoria
• Joel Pobar (Anthropic/Meta) -- Infrastructure, PyTorch optimization. Education: QUT (Australia)

This roster isn't just impressive on paper; it's a coup. Several were responsible for core components of GPT-4's reasoning, efficiency, and voice capabilities. Others led image-generation innovations like Muse or built memory modules crucial for scaling up AI's attention spans. Meta's hires reflect a global brain gain: most completed their undergraduate education in China or India, and pursued PhDs in the US or UK. It's a clear signal to students: brilliance isn't constrained by geography.
What Meta offered: Money, mission, and total autonomy

Convincing this calibre of talent to switch sides wasn't easy. Meta offered more than mission; it offered unprecedented compensation.

• Some were offered up to $300 million over four years.
• Sign-on bonuses of $50-100 million were on the table for top OpenAI researchers.
• The first year's payout alone reportedly crossed $100 million for certain hires.

This level of compensation places them above most Fortune 500 CEOs, not for running a company, but for building the future. It's also part of a broader message: Zuckerberg is willing to spend aggressively to win this race. OpenAI's Sam Altman called it "distasteful." Others at Anthropic and DeepMind described the talent raid as 'alarming.' Meta, meanwhile, has made no apologies. In the words of one insider: 'This is the team that gets to skip the red tape. They sit near Mark. They move faster than anyone else at Meta.'

The AGI problem: Bigger than just scaling up

But even with all the talent and capital in the world, AGI remains the toughest problem in computer science. The goal isn't to make better chatbots or faster image generators. It's to build machines that can reason, plan, and learn like humans. Why is that so hard?

• Generalisation: Today's models excel at pattern recognition, not abstract reasoning. They still lack true understanding.
• Lack of theory: There is no grand unified theory of intelligence. Researchers are working without a blueprint.
• Massive compute: AGI may require an order of magnitude more compute than even GPT-4 or Gemini.
• Safety and alignment: Powerful models can behave in unexpected, even dangerous ways. Getting them to want what humans want remains an unsolved puzzle.

To solve these, Meta isn't just scaling up; it's betting on new architectures, new training methods, and new safety frameworks. It's also why several of its new hires have deep expertise in AI alignment and multimodal reasoning.
What this means for students aiming for a future in AI

This story isn't just about Meta. It's about the direction AI is heading, and what it takes to get to the frontier. If you're a student in India wondering how to break into this world, take notes:

• Strong math and computer science foundations matter. Most researchers began with robust undergrad training before diving into AI.
• Multimodality, alignment, and efficiency are key emerging areas. Learn to work across language, vision, and reasoning.
• Internships, open-source contributions, and research papers still open doors faster than flashy resumes.
• And above all, remember: AI is as much about values as it is about logic. The future won't just be built by engineers; it'll be shaped by ethicists, philosophers, and policy thinkers too.
