
Tech talent war heats up: Meta poaches Apple AI researchers after OpenAI hires
Led by former Scale AI CEO Alexandr Wang and ex-GitHub CEO Nat Friedman, Meta has spent the last few days poaching AI experts left and right, offering them multi-million-dollar pay packages. Now, a Bloomberg report suggests Meta has hired Mark Lee and Tom Gunter, two AI researchers who previously worked at Apple.
Citing sources familiar with the matter, the report says Lee has already started working, while Gunter will join Meta soon. As it turns out, both Lee and Gunter were close to Ruoming Pang, the renowned AI researcher who previously led Apple's foundation models team and recently joined Meta after the company offered him a multiyear compensation package worth well over $200 million.
When Pang worked at Apple, Lee was his first hire, while Gunter had made a name for himself in Cupertino and was regarded as one of the team's most senior employees. Meta's Superintelligence Labs was reportedly created to bring together the company's foundation AI model teams, product teams and the Fundamental AI Research (FAIR) division.
A few days ago, Wired reported that OpenAI researchers Jason Wei and Hyung Won Chung may soon be joining Meta. Wei worked on the o3 and deep research models, while Chung worked on the o1 model and focused on reasoning and agents.
These poachings are part of Mark Zuckerberg's plan to expand Meta's Superintelligence Labs division with talent from around the world. In a post on Threads, the Meta CEO said his company would 'invest hundreds of billions of dollars into compute to build superintelligence', a hypothetical form of AI that can perform tasks better than humans.
In the coming days, tech giants and multi-billion-dollar AI startups like Google, OpenAI and Meta might double down on hiring AI researchers by offering them even more lucrative packages.
Related Articles


News18
Indian-Origin Trapit Bansal, Hammad Syed Among 44 Picked For Meta's Superintelligence Unit
A total of 44 people from varied origins were part of the new elite division called Meta Superintelligence. Meta has reportedly been poaching employees of other AI research companies like OpenAI, Google DeepMind, Anthropic and AI startups to build this elite team. Several reports stated that Meta is likely paying between $10 million and $100 million per year to the 44 people on the team.

2 Indians Among 44-Member Team

Two Indian-origin researchers were part of Meta's Superintelligence team. Besides Trapit Bansal, Hammad Syed also joined the elite team as a software engineer.

Who Is Hammad Syed?

Hammad Syed is, according to his LinkedIn profile, the co-founder and CEO of PlayAI, a leading AI voice generation platform that helps users create realistic, human-like speech from text using advanced AI models. With a strong focus on synthetic media, PlayAI serves thousands of content creators, developers, and businesses worldwide. Hammad has a background in technology and entrepreneurship. He pursued his education at Sir M Visvesvaraya Institute of Technology in Bangalore, India, as per Crunchbase.

Who Is Trapit Bansal?


NDTV
Names Of 44 People In Meta's Superintelligence Team Revealed, Only 2 Indians Make The Cut
Meta is aggressively poaching top AI talent, led by CEO Mark Zuckerberg, to build its Superintelligence team. The team is part of Meta's ambitious plan to develop artificial general intelligence (AGI) and superintelligence. Meta's recruitment strategy involves offering substantial compensation packages, sometimes exceeding $150 million, to attract top talent from competitors like Apple, Google, and OpenAI.

Recently, a social media user shared a list of 44 employees allegedly working on Meta's AI project, claiming the list was obtained from an anonymous employee. According to the post, these employees are likely earning between $10 million and $100 million per year. Of the 44, only two Indian-origin researchers, Trapit Bansal and Hammad Syed, have been included in Meta's AI team. Notably, half of the hires are reportedly from China, with around 75% holding PhDs and 70% being researchers. The list also suggests that Meta's recruitment efforts have been successful in poaching talent from top companies, with 40% of the hires coming from OpenAI, 20% from Google's DeepMind, and 15% from Scale.

"Detailed list of all 44 people in Meta's Superintelligence team. 50% from China; 75% have PhDs, 70% researchers; 40% from OpenAI, 20% DeepMind, 15% Scale; 20% L8+ level; 75% 1st-gen immigrants. Each of these people is likely getting paid $10-$100M/yr," reads the post by Deedy (@deedydas), dated July 19, 2025.

Who are the 2 Indians on the list?

Trapit Bansal, an IIT Kanpur alumnus with a PhD from the University of Massachusetts Amherst, specialises in meta-learning, deep learning, and natural language processing. He has worked at top AI institutions like OpenAI, Microsoft Research, Google Research, and Facebook. He was involved in developing OpenAI's o-series AI models and has collaborated with notable AI researchers like Ilya Sutskever. Hammad Syed, a recent addition to Meta, co-founded voice startup PlayAI with Mahmoud Felfel in 2021.
PlayAI specialises in creating lifelike text-to-speech models and voice agents in over 30 languages. According to a Bloomberg report citing an internal memo, the entire PlayAI team is set to join Meta, further bolstering the company's AI capabilities.

About Meta's Superintelligence Labs

Meta Superintelligence Labs (MSL) is a division of Meta, announced by CEO Mark Zuckerberg in July 2025, aimed at advancing artificial general intelligence (AGI) to achieve "superintelligence". MSL consolidates Meta's AI efforts, including its foundation model teams, product teams, Fundamental AI Research (FAIR) division, and a new lab focused on next-generation large language models (LLMs). The initiative reflects Meta's ambition to compete with leading AI organisations like OpenAI, Google, and Anthropic, following setbacks with its Llama 4 model and internal challenges like staff departures and underperforming product releases. MSL is led by Alexandr Wang, former CEO of Scale AI, who serves as Meta's Chief AI Officer. Nat Friedman, former GitHub CEO and a prominent AI investor, co-leads MSL, focusing on AI products and applied research.


Mint
Mint Explainer: Is OpenAI exaggerating the powers of its new ChatGPT Agent?
Leslie D'Monte

OpenAI has flagged the agent as high-risk under its safety framework. Is this just marketing hype or a sign that AI is genuinely becoming more powerful and autonomous?

On Thursday, OpenAI launched its autonomous ChatGPT Agent, a tool capable of finding and buying things online, managing your calendar, and booking you an appointment with a doctor. It's essentially a digital assistant that doesn't just provide information but completes actual tasks.

That said, OpenAI has flagged the agent as high-risk under its safety framework, warning it could potentially be used to create dangerous biological or chemical substances. Is this just marketing hype, timed to build momentum for the launch of GPT-5, or a sign that AI agents are genuinely becoming more powerful and autonomous, akin to the agents who protect the computer-generated world of The Matrix?

What is ChatGPT Agent?

Say you want to rearrange your calendar, find a doctor and schedule an appointment, or research competitors and deliver a report. ChatGPT Agent can now do it for you. The agent can browse websites, run code, analyse data, and even create slide decks or spreadsheets, all based on your instructions. It combines the strengths of OpenAI's earlier tools, Operator (which could navigate the web) and deep research (which could analyse and summarise information), into a single system. You stay in control throughout: ChatGPT asks for permission before doing anything important, and you can stop or take over at any time.
This new capability is available to Pro, Plus, and Team users through the tools dropdown.

How does it work?

ChatGPT Agent uses a powerful set of tools to complete tasks, including a visual browser that interacts with websites like a human, a text-based browser for reasoning-heavy searches, a terminal for code execution, and direct application programming interface (API) access. It can also connect to apps such as Gmail or GitHub to fetch relevant information. You can log in to websites within the agent's browser, allowing it to dig deeper into personalised content. All of this runs on its own virtual computer, which keeps track of context even across multiple tools. The agent can switch between browsers, download and edit files, and adapt its methods to complete tasks quickly and accurately. It's built for back-and-forth collaboration: you can step in anytime to guide or change the task, and ChatGPT can ask for more input when needed. If a task takes time, you'll get updates and a notification on your phone once it's done.

Has OpenAI tested its performance?

OpenAI said that on Humanity's Last Exam (HLE), which tests expert-level reasoning across subjects, ChatGPT Agent achieved a new high score of 41.6, rising to 44.4 when multiple attempts were run in parallel and the most confident response was selected. On FrontierMath, the toughest known maths benchmark, the agent scored 27.4% using tools such as a code-executing terminal, far ahead of previous models. In real-world tasks, ChatGPT Agent performs at or above human levels in about half of the cases, based on OpenAI's internal evaluations. These tasks include building financial models, analysing competitors, and identifying suitable sites for green hydrogen projects. ChatGPT Agent also outperforms rivals on specialised tests such as DSBench for data science and SpreadsheetBench for spreadsheet editing (45.5% vs Copilot in Excel's 20.0%).
On BrowseComp and WebArena, which test browsing skills, the agent achieves the highest scores to date, according to OpenAI.

What are some of the things it can do?

Consider travel planning. The agent won't just suggest ideas; it will navigate booking websites, fill out forms, and even make reservations once you give it permission. You can also ask it to read your emails, find meeting invitations, and automatically schedule appointments in your calendar, or even draft and send follow-up emails. This level of coordination typically required juggling between apps, but the agent manages it in a single conversational flow. Another example involves shopping and price comparison. You can tell the agent to "order the best-reviewed smartphone under ₹15,000", and it can search online stores, compare prices and reviews, and proceed to checkout on a preferred platform. Customer support and task automation are other examples, where the agent is used to troubleshoot an issue, log into support portals, and even file return or refund requests.

How are AI agents typically built?

Unlike basic chatbots, AI agents are autonomous systems that can plan, reason, and complete complex, multi-step tasks with minimal input, such as coding, data analysis, or generating reports. They are built by combining ways to take in information, think, and take action. Developers begin by deciding what the agent should do, after which the agent collects data, such as text or images, from its environment. AI agents use large language models (LLMs) like GPT-4 as their core "brain", which allows them to understand and respond to natural language instructions. To let AI agents take action, developers connect the LLM to tools like a web browser, code editor, calculator, and APIs for services such as Gmail or Slack. Frameworks like LangChain help integrate these parts and keep track of information. Some AI agents learn from experience and get better over time.
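The think-act-observe loop described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch only: the "LLM" here is a stub that emits structured tool calls, whereas a real agent would call a model such as GPT-4 through an API, with a framework like LangChain managing the plumbing. All names below (run_agent, TOOLS, fake_llm) are hypothetical, not any real library's API.

```python
def calculator(expression: str) -> str:
    """A 'tool' the agent can invoke: evaluate a basic arithmetic string."""
    # Builtins are stripped so only plain arithmetic expressions evaluate.
    return str(eval(expression, {"__builtins__": {}}, {}))

# The registry of actions the LLM 'brain' is allowed to take.
TOOLS = {"calculator": calculator}

def fake_llm(task: str, observations: list[str]) -> dict:
    """Stand-in for the LLM: decides the next action from the task and
    any tool results seen so far. Returns a tool call or a final answer."""
    if not observations:                       # nothing observed yet: call a tool
        return {"action": "calculator", "input": task}
    return {"action": "finish", "input": observations[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    """The agent loop: think -> act -> observe, until the model finishes."""
    observations: list[str] = []
    for _ in range(max_steps):
        decision = fake_llm(task, observations)
        if decision["action"] == "finish":
            return decision["input"]
        tool = TOOLS[decision["action"]]       # dispatch to the chosen tool
        observations.append(tool(decision["input"]))
    return "step limit reached"
```

A call like run_agent("6 * 7") routes the task through the calculator tool and returns its result; swapping fake_llm for a real model call is what turns this skeleton into an actual agent.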
Testing and careful setup make sure they work well and follow rules.

Does ChatGPT Agent have credible competition?

Google's Project Astra, part of its Gemini AI line, is developing a multimodal assistant that can see, hear, and respond in real time. Gemini CLI is an open-source AI agent that brings Google's Gemini model directly to the terminal for fast, lightweight access. It integrates with Gemini Code Assist, offering developers on all plans AI-powered coding in both VS Code and the command line. Microsoft is embedding Copilot into Windows, Office, and Teams, giving its agent access to workflows, system controls, and productivity tools, soon to be enhanced by a dedicated Copilot Runtime. Meta is building more socially focused agents within messaging and the metaverse, which could evolve into utility tools. Apple is revamping Siri through Apple Intelligence, combining GPT-level reasoning with strict privacy features and deep on-device integration. Other smart agents include Oracle's Miracle Agent, IBM's Watson tools, Salesforce's Agentforce, Anthropic's Claude 3.5, and Perplexity AI's action-oriented agents through its Comet project, blending search with agentic behaviour. The competitive advantage, though, may go to companies that can integrate these AI agents into everyday applications and trigger actions with a single, unified tool, something ChatGPT Agent has demonstrated.

Why did OpenAI warn that ChatGPT Agent could be used to trigger biological warfare?

OpenAI claimed ChatGPT Agent's superior capabilities could, in theory, be misused to help someone create dangerous biological or chemical substances. However, it clarified that there was no solid evidence it could actually do so. Regardless, OpenAI is activating the highest level of safety measures under its internal "preparedness framework".
These include thorough threat modelling to anticipate potential misuse, special training to ensure the model refuses harmful requests, and constant monitoring using automated systems that watch for risky behaviour. There are also clear procedures in place for suspicious activity.

Should we take this risk seriously?

Ja-Nae Duane, AI expert, MIT Research Fellow and co-author of SuperShifts, said the more autonomous the agent, the more permissions and access rights it would require. For example, buying a dress requires wallet access; scheduling an event requires calendar and contact-list access. "While standard ChatGPT already presents privacy risks, the risks from ChatGPT Agent are exponentially higher because people will be granting it access rights to external tools containing personal information (like calendar, email, wallet, and more). There's a significant gap between the pace of AI development and AI literacy; many people haven't even fully understood ChatGPT's existing privacy risks, and now they're being introduced to a feature with exponentially more risks," she said.

Duane added that the key risks included data leaks, mistaken actions, prompt injection, and account compromise, especially when handling sensitive information. Malicious actors, she warned, could exploit agents by manipulating inputs, abusing tool access, stealing credentials, or poisoning data to bias outputs. Poor third-party integration and over-reliance on agents could worsen the impact, while the agent's "black box" nature would make it hard to trace errors, she added. In the wrong hands, these agents could be weaponised for fraud, phishing, or even to generate malware.

What are the other concern areas for enterprises?

Developers are increasingly deploying AI agents across IT, customer service, and enterprise workflows.
According to Nasscom, 46% of Indian firms are experimenting with these agents, particularly in IT, HR, and finance, while manufacturing leads in robotics, quality control, and automation. Beyond concerns around hallucinations, security, privacy, and copyright or intellectual property (IP) violations, a key challenge for businesses is ensuring a return on investment. Gartner noted that many so-called agentic use cases could be handled by simpler tools and predicted that more than 40% of such projects would be scrapped by 2027 over high costs, unclear value, or inadequate risk controls. Of the thousands of vendors in this space, only around 130 are seen as credible; many engage in "agent washing" by repackaging chatbots, robotic process automation (RPA), or basic assistants as autonomous agents. Nasscom corroborated these concerns, highlighting that 62% of enterprises were still only testing agents in-house.

Why is 'humans-in-the-loop' a must?

OpenAI CEO Sam Altman advised granting agents only the minimum access needed for each task, not blanket permissions. Nasscom believes that to scale responsibly, enterprises must prioritise human-AI collaboration, trust, and data readiness. It has recommended that firms adopt AI agents with a "human-in-the-loop" approach, reflecting the need for oversight and contextual judgment. According to Duane, users must understand both the tool's strengths and its limits, especially when handling sensitive data. Caution is key, as misuse could have serious consequences. She also emphasised the importance of AI literacy, noting that AI was evolving far faster than most people's understanding of how to use it responsibly.
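The least-privilege advice above, granting an agent only the access a given task needs rather than blanket permissions, can be illustrated with a short sketch. Everything here (ScopedAgent, TASK_SCOPES, the permission names) is a hypothetical illustration of the principle, not a real product's permission model.

```python
# Minimum access per task: each task maps to the smallest set of
# permissions it needs, echoing the article's examples (buying needs
# wallet access; scheduling needs calendar access).
TASK_SCOPES = {
    "schedule_meeting": {"calendar.read", "calendar.write"},
    "buy_item": {"browser", "wallet"},
}

class ScopedAgent:
    """An agent whose tool access is limited to one task's scope."""

    def __init__(self, task: str):
        # Grant only what this task requires; unknown tasks get nothing.
        self.granted = TASK_SCOPES.get(task, set())

    def can_use(self, permission: str) -> bool:
        """Tool calls outside the granted scope are refused."""
        return permission in self.granted
```

An agent scoped to "schedule_meeting" can touch the calendar but is refused wallet access, which is exactly the containment Altman and Nasscom's human-in-the-loop recommendation aim for: a compromised or confused agent can only misuse what it was given.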