Meet Alexandr Wang, Meta's $15 billion bet to catch up in the global AI race
Meta is bringing on Alexandr Wang—the 27-year-old co-founder and CEO of Scale AI—to head its newly formed superintelligence lab. The deal, reportedly valued at up to $15 billion, marks one of Meta's boldest steps yet in its bid to regain a leadership position in the global race for artificial intelligence.
Wang, who founded Scale AI in 2016 after dropping out of MIT and joining Y Combinator, has built a data-labelling juggernaut that powers the AI systems of OpenAI, Microsoft, and Google's Waymo. Under his leadership, Scale AI expanded from helping autonomous vehicles with street-level data to enabling the training of large language models (LLMs)—the core of modern generative AI.
Alexandr Wang: Meta's $15 billion bet
At 24, Wang became the world's youngest self-made billionaire. The son of Chinese physicists, he scaled Scale AI into one of the most influential enablers of AI development. His company's strengths in labelling, data infrastructure, and deployment now form the backbone of Meta's renewed AI ambitions.
According to The Verge, Meta has acquired a 49 per cent stake in Scale AI, valuing the firm at $29 billion. Much of this investment will be directed at fuelling Meta's push into next-generation AI, particularly around autonomy, decision-making, and reasoning systems.
From Scale to superintelligence
Meta's acquisition is more than just a financial investment—it signals a strategic reset. While the company has invested heavily in AI across WhatsApp, Instagram and smart glasses, CEO Mark Zuckerberg is now focusing on artificial superintelligence (ASI): machines with reasoning capacities beyond human cognition.
The new superintelligence lab, to be led by Wang and reporting directly to Zuckerberg, will assemble elite AI researchers reportedly offered industry-leading compensation. Wang's practical, data-first approach represents a generational pivot from Meta's long-time AI chief Yann LeCun, who remains at Meta but is increasingly sidelined due to his divergence from prevailing AI strategies.
Alexandr Wang's vision for the next leap in AI
Wang maintains a low profile compared to some of Silicon Valley's more outspoken figures, but his impact is widely recognised. His belief in the primacy of clean, scalable data aligns with a growing consensus that the next breakthroughs in AI will hinge more on training data than novel algorithms—an area where Scale AI excels.
For Meta, this move may be a watershed moment. The company has trailed OpenAI and Anthropic in model development and deployment. With Wang now helming its new lab, Meta is betting on a fresh chapter—one that could reassert its dominance in the increasingly competitive AI arena.

Related Articles


Economic Times
29 minutes ago
Apple Paper questions path to AGI, sparks division in GenAI group
New Delhi: A recent research paper from Apple on the limitations of large reasoning models in artificial intelligence has left the generative AI community divided, sparking significant debate over whether the current path taken by AI companies towards artificial general intelligence is the right one. The paper, titled The Illusion of Thinking, published earlier this week, demonstrates that even the most sophisticated large reasoning models do not genuinely think or reason in a human-like way. Instead, they excel at pattern recognition and mimicry, generating responses that only appear intelligent but lack true comprehension or conceptual understanding.
The study used controlled puzzle environments, such as the popular Tower of Hanoi puzzle, to systematically test reasoning abilities across varying complexities in large reasoning models such as OpenAI's o3 Mini, DeepSeek's R1, Anthropic's Claude 3.7 Sonnet and Google Gemini Flash. The findings show that while large reasoning and language models may handle simple or moderately complex tasks, they experience total failure when faced with high-complexity problems, a collapse that occurs despite sufficient computational resources.
Gary Marcus, a cognitive scientist and a known sceptic of the claims surrounding large language models, views Apple's work as providing compelling empirical evidence that today's models primarily repeat patterns learned during training from vast datasets, without genuine understanding or true reasoning capabilities. "If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote," Marcus wrote in his blog. Marcus' arguments echo earlier comments by Meta's chief AI scientist Yann LeCun, who has argued that current AI systems are mainly sophisticated pattern recognition tools rather than true thinkers.
The release of Apple's paper ignited a polarised debate across the broader AI community, with many panning the design of the study rather than its findings. A published critique of the paper by researchers from Anthropic and San Francisco-based Open Philanthropy said the study has issues in its experimental design and that it overlooks the models' output limits. In an alternate demonstration, the researchers tested the models on the same problems but allowed them to use code, resulting in high accuracy across all the tested models. The study's failure to account for output limits and for the models' coding abilities has also been highlighted by other AI commentators and researchers, including Matthew Berman, a popular AI commentator and researcher. "SOTA models failed The Tower of Hanoi puzzle at a complexity threshold of >8 discs when using natural language alone to solve it. However, ask it to write code to solve it, and it flawlessly does up to seemingly unlimited complexity," Berman wrote in a post on X (formerly Twitter).
The study highlights Apple's more cautious approach to AI compared to rivals like Google and Samsung, who have aggressively integrated AI into their products.
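For context on the point Berman raises, the Tower of Hanoi has a compact recursive solution. The sketch below is illustrative only (it is not taken from the Apple paper or the critique); it shows the kind of short program the critics say models can produce reliably, even when a move-by-move natural-language solution breaks down past roughly eight discs.

# Illustrative only: a standard recursive Tower of Hanoi solver.
def hanoi(n, source, target, spare, moves):
    """Append to `moves` the sequence that shifts n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs out of the way
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack the smaller discs on top

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 1023 moves for 10 discs: 2**10 - 1, growing exponentially with disc count

The exponential growth in move count is why spelling out every move in natural language becomes impractical at higher disc counts, while the program above stays the same length regardless of complexity.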
Apple's research helps explain its hesitancy to fully commit to AI, contrasting with the industry's prevailing narrative of rapid progress. Some questioned the timing of the study's release, coinciding with Apple's annual WWDC event where it announces its next software updates; commenters across online forums said the study was more about managing expectations in light of Apple's own struggles with AI. That said, practitioners and business users argue that the findings do not change the immediate utility of AI tools for everyday applications.


Time of India
41 minutes ago
Government to 'facilitate' auto companies' procurement of Chinese magnets
NEW DELHI: The government has stepped in to support the auto and component industry's efforts to procure rare earth magnets from China, with companies submitting End User Certificates (EUC) to Beijing certifying that shipments will not be diverted towards defence or weapons production. The commerce department and the ministry of external affairs have been engaged in seeking time for an interface between the Indian industry and representatives of the Chinese government. "We are mindful of the concerns of the industry, especially as manufacturing schedules are likely to be disrupted if the supplies do not resume in some time," one of the sources said.
A senior government official said that commerce secretary Sunil Barthwal recently held consultations with auto sector representatives, and over the next few weeks a delegation of executives from industry bodies is expected to go to China, with the Indian embassy in Beijing facilitating the exercise. The government, however, wants to play the role of a "facilitator" in the process, clearly demarcating it from being seen as a G2G (government-to-government) negotiation. "The idea is to involve the respective stakeholders and ministries for a concerted effort to expedite the approval process. The industry has already submitted the requisite documents related to the matter."
Local industry has argued that it is working on developing capabilities, but this is seen as a massive challenge. "Developing indigenous supplies and capabilities is crucial, but it's a complex process requiring large-scale investments. Also, mitigating challenges of radioactive materials during the extraction process is a critical factor," said Alok Perti, former coal secretary and now a senior board advisor with B2B mining and metal industry body MMPI. "Providing viability gap funding like PLI schemes by the government, and encouragement to undertake research with countries like Russia and Australia, is also relevant."
Sources said that the auto industry bodies, the Society of Indian Automobile Manufacturers (Siam), which represents vehicle manufacturers, and the Automotive Components Manufacturers Association (ACMA), have been in direct contact with the government on the matter, even as companies have submitted their petitions through the route mandated by the Chinese government. The applications have reached China, and companies are eagerly awaiting movement, officials of at least four large companies told TOI. The industry is hopeful of a "positive result" following the meetings that Chinese vice foreign minister Sun Weidong, who is on a two-day visit to India, has had with his Indian counterparts.


Time of India
41 minutes ago
'Neuralink babies'? Scale AI's Alexandr Wang says he is waiting for Elon Musk's brain chips before having kids
In a statement that straddles science fiction and near-future reality, Scale AI founder Alexandr Wang has revealed he's putting off parenthood—for now. But not because of career demands or personal timing. His reason? He's waiting for Elon Musk's Neuralink to become mainstream. Yes, Wang wants his future children to be among the first humans enhanced by brain-computer interfaces from birth.
During a recent appearance on The Shawn Ryan Show, the 28-year-old tech prodigy shared a vision that feels pulled from the pages of a futuristic novel. 'When we get Neuralink and we get these other technologies, kids who are born with them are gonna learn how to use them in like crazy, crazy ways,' Wang said, explaining that the first seven years of life—when neuroplasticity is at its peak—present the most fertile ground for integrating superintelligence into the human experience.
Neuralink, Meet Nature
Neuralink, founded by Elon Musk, is currently trialing a brain-chip implant the size of a coin. Though still in early clinical stages, the device has already shown stunning potential: one patient with ALS reportedly edited a video using only his mind. But Neuralink isn't alone. Synchron, backed by heavyweights like Jeff Bezos and Bill Gates, is collaborating with Apple to help patients with disabilities use iPhones through brain signals. Motif Neurotech, another contender, is developing a neurostimulator that treats severe depression and functions like a pacemaker for the brain. Wang, who is also taking on a new role at Meta to lead its superintelligence initiatives, seems to believe these brain-machine hybrids are not just medical miracles—they are the future of human learning, cognition, and possibly even evolution.
Born to Compute?
His vision hinges on a well-documented trait: the astonishing neuroplasticity of young brains. A 2009 study published in Brain & Development found that children's brains, particularly in the early years, are primed for adaptation. This plasticity not only helps kids learn languages or recover from injury but, in Wang's vision, could also help them learn how to "think" alongside, or even through, artificial intelligence. It's a radical idea—one that flips the conventional approach to parenting. Instead of shielding children from screen time or tech overload, Wang imagines a future where babies are born wired for the digital age, quite literally.
Ethics, Science, and the Silicon Valley Dream
As startling as Wang's perspective may seem, it's emblematic of a growing mindset in tech circles: that human limitations are solvable problems. But while Wang may be planning for AI-enhanced progeny, ethical concerns continue to hover over Neuralink and its competitors—from long-term brain health to consent, privacy, and the ever-blurring boundary between human and machine. Still, in a world racing toward a post-human horizon, Alexandr Wang's statement isn't just provocative—it might be prophetic. The question isn't whether Neuralink babies will happen. It's who dares to go first. And Wang, it seems, is ready to raise the world's first AI-native child—as soon as the software is ready.