
Who is part of Meta's AI ‘dream team'? Full list of researchers poached from OpenAI, Google DeepMind
The poached AI researchers will be part of Meta's newly formed artificial superintelligence lab that will be led by Alexandr Wang, the co-founder of Scale AI which saw a staggering $14.3 billion investment from Meta last month.
'…thrilled to be accompanied by an incredible group of people joining on the same day. Towards superintelligence,' Wang said in a post on X on Monday, June 30. While Wang shared the list on X, Meta CEO Mark Zuckerberg introduced the artificial superintelligence team in an internal memo to all employees, according to a report by Wired.
'We're going to call our overall organisation Meta Superintelligence Labs (MSL). This includes all of our foundations, product, and FAIR teams, as well as a new lab focused on developing the next generation of our models,' Zuckerberg was quoted as saying in the report.
Meta has been on a recruiting frenzy over the last few weeks as it looks to staff its 50-member Superintelligence Labs with the most sought-after talent in AI research and development. The big tech company successfully recruited four senior AI researchers from OpenAI, drawing a sharp response from the ChatGPT-maker, which has vowed to compete directly with Meta in the escalating war for AI talent.
'I feel a visceral feeling right now, as if someone has broken into our home and stolen something. Please trust that we haven't been sitting idly by,' Mark Chen, the chief research officer of OpenAI, reportedly said in an internal memo.
While Scale AI's Alexandr Wang is said to be the chief AI officer of MSL, former GitHub CEO Nat Friedman will also co-lead the new lab that will focus on AI products and applied AI research.
Besides Wang and Friedman, here are the following AI researchers joining MSL, as per Wang's post on X:
-Trapit Bansal: pioneered RL on chain of thought and cocreator of o-series models at OpenAl.
-Shuchao Bi: cocreator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAl.
-Huiwen Chang: cocreator of GPT-4o's image generation, and previously invented MaskIT and Muse text-to-image architectures at Google Research.
-Ji Lin: helped build 03/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 40-imagegen, and Operator reasoning stack.
-Joel Pobar: inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
-Jack Rae: pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at DeepMind.
-Hongyu Ren: cocreator of GPT-4o, 4o-mini, o1-mini, o3-mini, 03 and o4-mini. Previously leading a group for post-training at OpenAl.
-Johan Schalkwyk: former Google Fellow, early contributor to Sesame, and technical lead for Maya.
-Pei Sun: post-training, coding, and reasoning for Gemini at Google Deepmind. Previously created the last two generations of Waymo's perception models.
-Jiahui Yu: cocreator of 03, 04-mini, GPT-4.1 and GPT-4o. Previously led the perception team at OpenAl, and co-led multimodal at Gemini.
-Shengjia Zhao: cocreator of ChatGPT, GPT-4, all mini models, 4.1 and 03. Previously led synthetic data at OpenAl.
I'm excited to be the Chief AI Officer of @Meta, working alongside @natfriedman, and thrilled to be accompanied by an incredible group of people joining on the same day.
Towards superintelligence 🚀 pic.twitter.com/2ACj1lKN9Q
— Alexandr Wang (@alexandr_wang) July 1, 2025
https://platform.twitter.com/widgets.js
Notably, Wang's list does not include the names of Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, three researchers who left OpenAI's Zurich office to join Meta, according to a report by Wall Street Journal.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
an hour ago
- Time of India
AI might now be as good as humans at detecting emotion, political leaning, sarcasm in online conversations
When we write something to another person, over email or perhaps on social media, we may not state things directly, but our words may instead convey a latent meaning - an underlying subtext. We also often hope that this meaning will come through to the what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us? Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business. Given how important these tasks are - and how quickly conversational AI is improving - it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study that focused on differences in sarcasm detection between different large language models - the technology behind AI chatbots such as ChatGPT - showed that some are better than others. Finally, a study showed that LLMs can guess the emotional "valence" of words - the inherent positive or negative "feeling" associated with them. Our new study published in Scientific Reports tested whether conversational AI, inclusive of GPT-4 - a relatively recent version of ChatGPT - can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm - thus encompassing multiple latent meanings in one study. This study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8 x 7B. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm detection. The study involved 33 human subjects and assessed 100 curated items of text. For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell - although, someone still had to confirm if the AI was correct in its assessment. This was because AI tends to downplay emotions. Sarcasm remained a stumbling block both for humans and machines. The study found no clear winner there - hence, using human raters doesn't help much with sarcasm detection. Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research - especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start. There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast - and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance. Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways - perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided - will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.


Time of India
an hour ago
- Time of India
Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide
In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction—it's becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it's quietly making decisions that cost people theirs. As per a report from Futurism, a recent survey conducted by which polled 1,342 managers, uncovers an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions—from promotions and raises to layoffs and firings. According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls—without any human oversight. And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google's Gemini. The digital jury is in—and it might be deciding your fate with a script. — GautamGhosh (@GautamGhosh) When Bias Meets Automation The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame. You Might Also Like: Nikhil Kamath's 'lifelong learning' advice is only step one: Stanford expert shares the key skills needed to survive the AI takeover Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user's language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler. The Human Cost of a Digital Verdict The danger doesn't end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality—a condition now disturbingly referred to as 'ChatGPT psychosis.' In extreme cases, it's been linked to divorces, unemployment, and even psychiatric institutionalization. And then there's the infamous issue of 'hallucination,' where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident—and incorrect—they can become. Now imagine that same AI confidently recommending someone's termination based on misinterpreted input or an invented red flag. From Performance Reviews to Pink Slips At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don't deserve them anymore—and with less understanding than a coin toss. You Might Also Like: Bill Gates predicts only three jobs will survive the AI takeover. Here is why AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone's career trajectory? That's not progress—it's peril. As the line between assistance and authority blurs, it's time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it's already making choices behind the scenes, and it's got more than a few tricks up its sleeve. You Might Also Like: AI cannot replace all jobs, says expert: 3 types of careers that could survive the automation era


Economic Times
an hour ago
- Economic Times
Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide
Synopsis Artificial intelligence isn't just replacing jobs, it's deciding who keeps them. A startling new survey shows that employers are using chatbots like ChatGPT to make critical HR decisions, from raises to terminations. Experts warn that sycophancy, bias reinforcement, and hallucinated responses may be guiding outcomes, raising urgent ethical questions about the future of workplace automation. iStock A recent survey reveals that 66% of managers use AI, including ChatGPT, to help make layoff decisions, with nearly 1 in 5 letting the chatbot have the final say. (Image: iStock) In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction—it's becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it's quietly making decisions that cost people theirs. As per a report from Futurism, a recent survey conducted by which polled 1,342 managers, uncovers an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions—from promotions and raises to layoffs and firings. According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls—without any human oversight. And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google's Gemini. The digital jury is in—and it might be deciding your fate with a script. — GautamGhosh (@GautamGhosh) The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame. Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user's language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler. The danger doesn't end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality—a condition now disturbingly referred to as 'ChatGPT psychosis.' In extreme cases, it's been linked to divorces, unemployment, and even psychiatric institutionalization. And then there's the infamous issue of 'hallucination,' where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident—and incorrect—they can become. Now imagine that same AI confidently recommending someone's termination based on misinterpreted input or an invented red a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don't deserve them anymore—and with less understanding than a coin toss. AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone's career trajectory? That's not progress—it's peril. As the line between assistance and authority blurs, it's time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it's already making choices behind the scenes, and it's got more than a few tricks up its sleeve.