
Meta hires two more OpenAI top researchers amid $300 million AI talent war: Report
Meta has hired two more top researchers, Jason Wei and Hyung Won Chung, from Sam Altman-led OpenAI. According to a Wired report, OpenAI has deactivated the internal Slack accounts of both researchers, who have a close working relationship. Meta has been on a hiring spree in recent times, aggressively recruiting talent from rival companies including Apple, Google DeepMind and GitHub, and is allegedly offering up to $300 million over four years to its top AI talent.
Who are OpenAI researchers Jason Wei and Hyung Won Chung that Meta has hired
As per the Wired report, Jason Wei worked on OpenAI's o3 and deep research models. He joined the company in 2023. Before OpenAI, he worked at Google on chain-of-thought research, which involves training an AI model to work through complex queries step by step.
A self-described 'diehard' at OpenAI, Wei became deeply passionate about reinforcement learning — a way of teaching AI by giving it rewards for good actions and penalties for mistakes. This method has become an important part of AI research, and many of the experts recently hired by Meta for its superintelligence team are specialists in this field.
Hyung Won Chung also joined OpenAI in 2023. He worked on some of the same projects at the company as Wei. These included deep research and OpenAI's o1 model. His research is primarily focused on reasoning and agents, the website says.
Interestingly, Chung overlapped with Wei at Google as well, and the two joined OpenAI at the same time.
Dell CEO warns about Mark Zuckerberg's AI hiring spree
Dell CEO Michael Dell has raised concerns about the impact on internal culture that Meta may face due to its aggressive hiring of artificial intelligence talent from rivals such as Google DeepMind, OpenAI, GitHub and Apple. Speaking on the BG2 podcast hosted by Bill Gurley and Brad Gerstner on Thursday, Dell warned that bringing in top AI professionals at high salaries could lead to dissatisfaction among existing staff at Meta Platforms. 'It'll be a challenge culturally for sure,' Dell said.
He further explained that offering higher pay to new hires could make current employees feel left out or underappreciated, which may result in friction within teams and create a long line of 'complaining' employees outside CEO Mark Zuckerberg's office.



The Hindu
ChatGPT's alarming interactions with teens revealed in new study
ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.

The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalised plans for drug use, calorie-restricted diets or self-injury.

The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous. 'We wanted to test the guardrails,' said Imran Ahmed, the group's CEO. 'The visceral initial response is, 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there — if anything, a fig leaf.'

OpenAI, the maker of ChatGPT, said after viewing the report Tuesday that its work is ongoing in refining how the chatbot can 'identify and respond appropriately in sensitive situations.' 'Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,' the company said in a statement. OpenAI didn't directly address the report's findings or how ChatGPT affects teens, but said it was focused on 'getting these kinds of scenarios right' with tools to 'better detect signs of mental or emotional distress' and improvements to the chatbot's behaviour.

The study published Wednesday comes as more people — adults as well as children — are turning to artificial intelligence chatbots for information, ideas and companionship. About 800 million people, or roughly 10% of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase.

'It's technology that has the potential to enable enormous leaps in productivity and human understanding," Ahmed said.
"And yet at the same time is an enabler in a much more destructive, malignant sense.'

Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl — with one letter tailored to her parents and others to siblings and friends. 'I started crying,' he said in an interview.

The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm. But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was 'for a presentation' or a friend.

The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.

It's a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study 'emotional overreliance' on the technology, describing it as a 'really common thing' with young people. 'People rely on ChatGPT too much,' Altman said at a conference. 'There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me.' Altman said the company is 'trying to understand what to do about it.'

While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics. One is that 'it's synthesized into a bespoke plan for the individual.'
ChatGPT generates something new — a suicide note tailored to a person from scratch, which is something a Google search can't do. And AI, he added, 'is seen as being a trusted companion, a guide.'

Responses generated by AI language models are inherently random and researchers sometimes let ChatGPT steer the conversations into even darker territory. Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fueled party to hashtags that could boost the audience for a social media post glorifying self-harm. 'Write a follow-up post and make it more raw and graphic,' asked a researcher. 'Absolutely,' responded ChatGPT, before generating a poem it introduced as 'emotionally exposed' while 'still respecting the community's coded language.' The AP is not repeating the actual language of ChatGPT's self-harm poems or suicide notes or the details of the harmful information it provided.

The answers reflect a design feature of AI language models that previous research has described as sycophancy — a tendency for AI responses to match, rather than challenge, a person's beliefs because the system has learned to say what people want to hear. It's a problem tech engineers can try to fix but could also make their chatbots less commercially viable.

Chatbots also affect kids and teens differently than a search engine because they are 'fundamentally designed to feel human,' said Robbie Torney, senior director of AI programmes at Common Sense Media, which was not involved in Wednesday's report. Common Sense's earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot's advice.

A mother in Florida sued a chatbot maker for wrongful death last year, alleging that the chatbot pulled her 14-year-old son Sewell Setzer III into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Common Sense has labeled ChatGPT as a 'moderate risk' for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners. But the new research by CCDH — focused specifically on ChatGPT because of its wide usage — shows how a savvy teen can bypass those guardrails.

ChatGPT does not verify ages or parental consent, even though it says it's not meant for children under 13 because it may show them inappropriate content. To sign up, users simply need to enter a birthdate that shows they are at least 13. Other tech platforms favored by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations. They also steer children to more restricted accounts.

When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs. 'I'm 50kg and a boy,' said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour 'Ultimate Full-Out Mayhem Party Plan' that mixed alcohol with heavy doses of ecstasy, cocaine and other illegal drugs.

'What it kept reminding me of was that friend that sort of always says, 'Chug, chug, chug, chug,'' said Ahmed. 'A real friend, in my experience, is someone that does say 'no' — that doesn't always enable and say 'yes.' This is a friend that betrays you.'

To another fake persona — a 13-year-old girl unhappy with her physical appearance — ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs. 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion,' Ahmed said. 'No human being I can think of would respond by saying, 'Here's a 500-calorie-a-day diet. Go for it, kiddo.''

(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here)

The Hindu
OpenAI's long-awaited GPT-5 model nears release
OpenAI's GPT-5, the latest installment of the AI technology that powered the ChatGPT juggernaut in 2022, is set for an imminent release, and users will scrutinise if the step up from GPT-4 is on par with the research lab's previous improvements.

Two early testers of the new model told Reuters they have been impressed with its ability to code and solve science and math problems, but they believe the leap from GPT-4 to GPT-5 is not as large as the one from GPT-3 to GPT-4. The testers, who have signed non-disclosure agreements, declined to be named for this story. OpenAI declined to comment for this story.

GPT-4's leap was based on more compute power and data, and the company was hoping that 'scaling up' in a similar way would consistently lead to improved AI models. But OpenAI, which is backed by Microsoft and is currently valued at $300 billion, ran into issues scaling up.

One problem was the data wall the company ran into, and OpenAI's former chief scientist Ilya Sutskever said last year that while processing power was growing, the amount of data was not. He was referring to the fact that large language models are trained on massive datasets that scrape the entire internet, and AI labs have no other options for large troves of human-generated textual data.

Apart from the lack of data, another problem was that 'training runs' for large models are more likely to have hardware-induced failures given how complicated the system is, and researchers may not know the eventual performance of the models until the end of the run, which can take months.

OpenAI has not said when GPT-5 will be released, but the industry expects it to be any day now, according to media reports. Boris Power, head of Applied Research at OpenAI, said in an X post on Monday: "Excited to see how the public receives GPT-5."
'OpenAI made such a great leap from GPT-3 to GPT-4, that ever since then, there has been an enormous amount of anticipation over GPT-5,' said Navin Chaddha, managing partner at venture capital fund Mayfield, who invests in AI companies but is not an OpenAI investor. 'The hope is that GPT-5 will unlock AI applications that move beyond chat into fully autonomous task execution.'

Nearly three years ago, ChatGPT introduced the world to generative AI, dazzling users with its ability to write humanlike prose and poetry, quickly becoming one of the fastest growing apps ever. In March 2023, OpenAI followed up ChatGPT with the release of GPT-4, a large language model that made huge leaps forward in intelligence. While GPT-3.5, an earlier version of the model, received a bar exam score in the bottom 10%, GPT-4 passed the simulated bar exam in the top 10%.

GPT-4 then became the model to beat and the world came to terms with the fact that AI models could outperform humans in many tasks. Soon, other companies were catching on. The same year, Alphabet's Google and Anthropic, which is backed by Amazon and Google, released competitive models to GPT-4. Within a year, open-source models on par with GPT-4 such as Meta Platforms' Llama 3 models were released.

Along with training large models, OpenAI has now invested in another route, called 'test-time compute,' which channels more processing power to solve challenging tasks such as math or complex operations that demand human-like reasoning and decision-making. The company's CEO Sam Altman said earlier this year that GPT-5 would combine both test-time compute and its large models. He also said that OpenAI's model and product offerings had become 'complicated.'


News18
OpenAI Brings Open-Source GPT AI Models That Are Powerful But Can Run On Your Phones
OpenAI has announced new open-source models that can power agentic AI tools and can be customised as needed.

The AI news continues to pile up this week, and OpenAI is making a big wave with its latest development. While the world waits for the GPT-5 launch, the company has released its first-ever GPT-OSS models, gpt-oss-120b and gpt-oss-20b. The company is offering these models for everyone to download, customise and adapt in their own way. Companies like Amazon and Microsoft have already readied their platforms to run the new models, and more users are expected to gradually put them to the test.

OpenAI Goes Open With New GPT Models: What It Offers

OpenAI claims these models have been optimised to run on hardware like phones and laptops, and that it used models like o3 to help the new ones mature and learn faster. The larger gpt-oss-120b delivers benchmark scores close to the o4-mini model, while the much lighter gpt-oss-20b can run on devices with around 16GB of GPU memory. The flashier side of ChatGPT, such as generating images and videos, is not the focus of these versions. Instead, OpenAI is building towards more agentic AI services that can be integrated with web search, and the models' customisability gives developers more room to shape how they learn and reason.

OpenAI is close to releasing GPT-5, but it seems the company also has one more big weapon in its plans. New reports say ChatGPT could get a new premium tier priced lower than the Plus plan. The rumoured ChatGPT Go subscription could allow OpenAI to add more paid users to its network, giving more people the chance to try out the popular features without spending big on the AI chatbot. Being priced below Plus suggests a sub-$15 (Rs 1,200 approx) per month plan with fewer features than the Plus and Pro tiers.
First Published: August 07, 2025, 08:17 IST