Meta recruits 3 OpenAI researchers amid Sam Altman's poaching claims
Meta CEO Mark Zuckerberg has recruited three researchers from OpenAI to join his 'superintelligence' team, according to a report by The Wall Street Journal on Wednesday. The news comes days after OpenAI CEO Sam Altman accused the Facebook parent company of attempting to lure away its employees.
An OpenAI spokesperson confirmed the departure of the three researchers but declined to provide further details. According to The Wall Street Journal, Meta has hired Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai, all of whom were based at OpenAI's Zurich office.
What is 'Superintelligence'?
Superintelligence refers to a form of artificial intelligence that can outperform humans in all tasks. Currently, AI systems have not reached this level and remain below artificial general intelligence (AGI), which is defined as the ability to match human performance across a wide range of tasks.
Zuckerberg has reportedly taken a personal role in recruiting top AI talent globally. The Wall Street Journal reported that he has contacted leading researchers, developers, and entrepreneurs directly through emails and WhatsApp messages.
Last week, Sam Altman claimed that Meta had offered some of his employees $100 million in bonuses as part of its recruitment strategy. The claim has intensified the ongoing competition for top AI talent, and OpenAI has reportedly made significant counteroffers to retain its key personnel.
Meta's AI setbacks
Once regarded as a frontrunner in open-source AI development, Meta has faced internal challenges, including staff resignations and delays in launching new models designed to compete with offerings from Google, OpenAI, and China's DeepSeek. 'I've heard that Meta thinks of us as their biggest competitor,' Altman remarked.
Meta buys stake in Scale AI
In a further move to bolster its AI ambitions, Meta recently brought on board 28-year-old Scale AI CEO Alexandr Wang to work on its superintelligence initiatives. The company also acquired a 49 per cent stake in Scale AI for $14.3 billion.
Scale AI is a data labelling firm that processes and categorises information used to train large AI models. Reports suggest that Meta is aiming to reposition itself by assembling a dedicated team of experts to pursue artificial general intelligence.
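To make "data labelling" concrete, the snippet below is a minimal, purely illustrative sketch in Python of what one labelled training record could look like; the field names and values are hypothetical and do not reflect Scale AI's or Meta's actual formats.

```python
# Hypothetical labelled training record (illustrative only; not Scale AI's real schema).
labelled_record = {
    "input_text": "My order arrived with a cracked screen and I want a refund.",
    "task": "intent_classification",   # what the annotator was asked to decide
    "label": "refund_request",         # the human-assigned category
    "annotator_id": "anno_0042",       # who produced the label
    "review_status": "approved",       # optional second-pass quality check
}

# Large models are typically trained or fine-tuned on millions of such records.
```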
What is Artificial General Intelligence (AGI)?
Artificial general intelligence (AGI) is a theoretical form of AI that would possess cognitive abilities on par with those of humans. Although not yet realised, AGI would potentially be capable of reasoning, learning, perception, problem-solving, and understanding language—comparable to human intelligence.
An AI system would be considered to have reached AGI if its performance became indistinguishable from that of a human. One proposed benchmark is the Turing test, devised by computer scientist Alan Turing, in which a human questioner puts the same questions to a machine and to another person and must tell them apart from the replies alone; the machine passes if the questioner cannot reliably do so.
However, many experts believe that AGI remains several decades away from becoming a reality.
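As a purely illustrative aside, the blind-questioning idea behind the Turing test can be sketched in a few lines of Python; the `machine_reply`, `human_reply`, and `judge` callables below are hypothetical placeholders, not part of any real benchmark.

```python
import random

def run_turing_trial(questions, machine_reply, human_reply, judge):
    """One blind trial: the judge reads two unlabelled answer sets to the same
    questions and guesses which came from the machine ("A" or "B").
    Returns True if the judge is fooled, i.e. guesses wrong."""
    answer_sets = [
        ("machine", [machine_reply(q) for q in questions]),
        ("human", [human_reply(q) for q in questions]),
    ]
    random.shuffle(answer_sets)                 # hide which respondent is which
    labels = {"A": answer_sets[0][0], "B": answer_sets[1][0]}
    guess = judge(questions, answer_sets[0][1], answer_sets[1][1])
    return labels[guess] != "machine"           # fooled if the machine wasn't picked

# A system would only "pass" if, over many such trials, judges did no better than chance.
```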

Related Articles


Time of India · 17 minutes ago
After Nvidia and Google AI CEOs, top OpenAI executive says Anthropic CEO's AI job warning is 'wrong'
From left: Brad Lightcap, OpenAI chief operating officer; Sam Altman, OpenAI chief executive; and the hosts of the "Hard Fork" podcast, Casey Newton and Kevin Roose, during a live recording of "Hard Fork" at the SFJAZZ Center in San Francisco, June 24, 2025. Altman discussed President Trump's understanding of artificial intelligence, the war for AI talent and OpenAI's relationship with Microsoft. (Mike Kai Chen/The New York Times)

ChatGPT-maker OpenAI's COO, Brad Lightcap, has expressed skepticism about Anthropic CEO Dario Amodei's prediction that artificial intelligence (AI) will kill entry-level white-collar positions. Taking a more measured stance, Lightcap directly addressed Amodei's recent projection that AI could eliminate 50% of entry-level white-collar jobs within the next five years. 'We've seen no evidence of this. Dario is a scientist, and I would hope that he takes an evidence-based approach to these types of things,' Lightcap said during The New York Times' "Hard Fork" podcast.

Lightcap asserted that OpenAI, which works with a vast array of businesses, has not observed any impending doom for these employees. 'We work with every business under the sun. We look at the problem and opportunity of deploying AI into every company on earth, and we have yet to see any evidence that people are kind of wholesale replacing entry-level jobs,' he explained.

OpenAI CEO Sam Altman, who joined Lightcap for the panel interview, echoed his sentiments, suggesting that historical precedent indicates innovations like AI typically lead to new job creation.

The great AI job debate

Amodei's forecast, shared last month, was reportedly intended to galvanise governments and competitors into preparing for future changes. He expressed concern that 'Most of them are unaware that this is about to happen... It sounds crazy, and people just don't believe it.'

This contrast in outlook highlights a significant debate among tech leaders. While some executives, including those at Shopify and Duolingo, are encouraging managers to prove that AI cannot fill new roles, figures like Nvidia CEO Jensen Huang, Google DeepMind CEO Demis Hassabis and LinkedIn cofounder Reid Hoffman hold differing views. Huang is more optimistic that AI will reshape jobs rather than simply eliminate them, and Hoffman, while not predicting a 'bloodbath,' believes AI's full impact is still underestimated. Hassabis said AI will disrupt traditional roles but will also create valuable new jobs, and urged students and professionals to embrace AI tools.


Hans India · 28 minutes ago
Meta wins copyright case over AI training, but legal questions remain
Meta, the parent company of Facebook, has won a copyright lawsuit filed by authors including Sarah Silverman and Ta-Nehisi Coates. The authors accused Meta of using pirated copies of their books without permission to train its AI model, Llama. However, US District Judge Vince Chhabria ruled that the plaintiffs failed to provide adequate evidence that Meta's AI system harmed the market for their works.

While the ruling was in Meta's favor, the judge clarified that this case does not confirm that training AI with copyrighted content is legal. Instead, it highlights that the plaintiffs didn't present the right legal arguments. Chhabria emphasized that in many circumstances, using copyrighted materials without permission to train AI would be unlawful. This nuanced stance contrasts with another San Francisco judge's ruling earlier this week in favor of AI firm Anthropic, which deemed such usage fair use. The fair use doctrine is a crucial defense for AI companies, allowing limited use of copyrighted materials without explicit permission.

Meta welcomed the decision, describing fair use as vital to building transformative AI systems. Meanwhile, the authors' legal team criticized the judgment, citing an 'undisputed record' of Meta's large-scale use of copyrighted works.

The broader copyright battle between AI companies and creators continues to intensify. Companies like OpenAI, Microsoft, and Anthropic face multiple lawsuits from writers, journalists, and publishers over the use of copyrighted materials in AI training. Judge Chhabria warned of a future where AI-generated content could flood the market, undermining the value of original human-created works and disincentivizing creativity. Despite the legal win, the debate over AI's use of protected content is far from over.


Hans India · 43 minutes ago
WhatsApp introduces AI-powered message summaries with privacy-focused tech
WhatsApp has launched a new feature called Message Summaries, which uses Meta AI to provide quick summaries of unread messages in a chat. Rolled out in English for users in the United States, the feature is aimed at helping users quickly grasp the gist of conversations they've missed without reading each individual message. Displayed in bullet-point format, the AI-generated summaries can be accessed by simply tapping on the unread message count in a chat. These summaries are private and only visible to the individual user. WhatsApp emphasizes that users remain in control—they can choose to opt into the feature and even restrict which chats can be summarized using Advanced Chat Privacy settings. The company underlines that it uses a privacy-preserving technology called Private Processing, which ensures that neither Meta nor WhatsApp can see any user messages while generating the summaries. This aligns with the platform's continued focus on user security and confidentiality. Though the accuracy of the summaries is yet to be independently verified, WhatsApp has plans to expand the feature to more languages and countries later this year. The Message Summaries rollout comes as part of a broader push by Meta to integrate AI into WhatsApp. Recently, features like the Meta AI button and @Meta AI prompts in chats have also been introduced—though unlike Message Summaries, these AI features are turned on by default and currently cannot be disabled. This latest update follows the platform's recent controversial move to introduce advertisements in the app—a step the app's original founders had strongly opposed. Stay tuned as WhatsApp continues to evolve with more AI-driven tools aimed at improving usability while prioritizing privacy.