
Inside Meta's Superintelligence Lab: The scientists Mark Zuckerberg handpicked; the race to build real AGI
Mark Zuckerberg has rarely been accused of thinking small. After attempting to redefine the internet through the metaverse, he's now set his sights on a more ambitious frontier: superintelligence—the idea that machines can one day match, or even surpass, the general intelligence of humans.
To that end, Meta has created an elite unit with a name that sounds like it belongs in a sci-fi script: Meta Superintelligence Lab (MSL). But this isn't fiction. It's a real-world, founder-led moonshot, powered by aggressive hiring, audacious capital, and a cast of technologists who've quietly shaped today's AI landscape.
This is not just a story of algorithms and GPUs. It's about power, persuasion, and the elite brains Zuckerberg believes will push Meta into the next epoch of intelligence.
The architects: Who's running Meta's AGI ambitions?
Zuckerberg has never been one to let bureaucracy slow him down. So he didn't delegate the hiring for MSL—he did it himself. The three minds now driving this initiative are not traditional corporate executives. They are product-obsessed builders, technologists who operate with startup urgency and an almost missionary belief in artificial general intelligence (AGI).
• Alexandr Wang — Chief AI Officer, Head of MSL. Previously: Founder, Scale AI. Education: MIT dropout (Computer Science).
• Nat Friedman — Co-lead, Product & Applied AI. Previously: CEO, GitHub; Microsoft executive. Education: B.S. Computer Science & Math, MIT.
• Daniel Gross — Joining soon, role TBD. Previously: Co-founder, Safe Superintelligence; ex-Apple, Y Combinator. Education: No degree; accepted into Y Combinator at 18.
Wang, once dubbed the world's youngest self-made billionaire, is a data infrastructure prodigy who understands what it takes to feed modern AI.
Friedman, a revered figure in the open-source community, knows how to productise deep tech. And Gross, who reportedly shares Zuckerberg's intensity, brings a perspective grounded in AI alignment and risk.
Together, they form a high-agency, no-nonsense leadership core—Zuckerberg's version of a Manhattan Project trio.
The scientists: 11 defections that shook the AI world
If leadership provides the vision, the next 11 are the ones expected to engineer it. In a hiring spree that rattled OpenAI, DeepMind, and Anthropic, Meta recruited some of the world's most sought-after researchers—those who helped build GPT-4, Gemini, and several of the most important multimodal models of the decade.
• Jack Rae — recruited from DeepMind. Expertise: LLMs, long-term memory in AI. Education: CMU, UCL.
• Pei Sun — recruited from DeepMind. Expertise: structured reasoning (Gemini project). Education: Tsinghua, CMU.
• Trapit Bansal — recruited from OpenAI. Expertise: chain-of-thought prompting, model alignment. Education: IIT Kanpur, UMass Amherst.
• Shengjia Zhao — recruited from OpenAI. Expertise: alignment; co-creator of ChatGPT, GPT-4. Education: Tsinghua, Stanford.
• Ji Lin — recruited from OpenAI. Expertise: model optimization, GPT-4 scaling. Education: Tsinghua, MIT.
• Shuchao Bi — recruited from OpenAI. Expertise: speech-text integration. Education: Zhejiang, UC Berkeley.
• Jiahui Yu — recruited from OpenAI/Google. Expertise: Gemini vision, GPT-4 multimodal. Education: USTC, UIUC.
• Hongyu Ren — recruited from OpenAI. Expertise: robustness and safety in LLMs. Education: Peking Univ., Stanford.
• Huiwen Chang — Expertise: Muse, MaskGIT (next-gen image generation). Education: Tsinghua, Princeton.
• Johan Schalkwyk — recruited from Sesame AI/Google. Expertise: voice AI; led Google's voice search efforts. Education: Univ. of Pretoria.
• Joel Pobar — recruited from Anthropic/Meta. Expertise: infrastructure, PyTorch optimization. Education: QUT (Australia).
This roster isn't just impressive on paper—it's a coup. Several were responsible for core components of GPT-4's reasoning, efficiency, and voice capabilities. Others led image generation innovations like Muse or built memory modules crucial for scaling up AI's attention spans.
Meta's hires reflect a global brain gain: most completed their undergrad education in China or India, and pursued PhDs in the US or UK. It's a clear signal to students—brilliance isn't constrained by geography.
What Meta offered: Money, mission, and total autonomy
Convincing this calibre of talent to switch sides wasn't easy. Meta offered more than mission—it offered unprecedented compensation.
• Some were offered up to $300 million over four years.
• Sign-on bonuses of $50–100 million were on the table for top OpenAI researchers.
• The first year's payout alone reportedly crossed $100 million for certain hires.
This level of compensation places them above most Fortune 500 CEOs—not for running a company, but for building the future.
It's also part of a broader message: Zuckerberg is willing to spend aggressively to win this race.
OpenAI's Sam Altman called it "distasteful." Others at Anthropic and DeepMind described the talent raid as "alarming." Meta, meanwhile, has made no apologies. In the words of one insider: "This is the team that gets to skip the red tape. They sit near Mark. They move faster than anyone else at Meta."
The AGI problem: Bigger than just scaling up
But even with all the talent and capital in the world, AGI remains one of the hardest open problems in computer science.
The goal isn't to make better chatbots or faster image generators. It's to build machines that can reason, plan, and learn like humans.
Why is that so hard?
• Generalisation: Today's models excel at pattern recognition, not abstract reasoning. They still lack true understanding.
• Lack of theory: There is no grand unified theory of intelligence. Researchers are working without a blueprint.
• Massive compute: AGI may require an order of magnitude more compute than even GPT-4 or Gemini.
• Safety and alignment: Powerful models can behave in unexpected, even dangerous ways. Getting them to want what humans want remains an unsolved puzzle.
To solve these, Meta isn't just scaling up—it's betting on new architectures, new training methods, and new safety frameworks. It's also why several of its new hires have deep expertise in AI alignment and multimodal reasoning.
What this means for students aiming for a future in AI
This story isn't just about Meta. It's about the direction AI is heading—and what it takes to get to the frontier.
If you're a student in India wondering how to break into this world, take notes:
• Strong math and computer science foundations matter. Most researchers began with robust undergrad training before diving into AI.
• Multimodality, alignment, and efficiency are key emerging areas. Learn to work across language, vision, and reasoning.
• Internships, open-source contributions, and research papers still open doors faster than flashy resumes.
• And above all, remember: AI is as much about values as it is about logic. The future won't just be built by engineers—it'll be shaped by ethicists, philosophers, and policy thinkers too.