Latest news with #JanLeike

Business Insider
11 hours ago
- Business
- Business Insider
OpenAI engineers are 8 times as likely to join Anthropic as the reverse, a report says. Here's why.
The AI startup Anthropic is siphoning top talent from OpenAI and Google's DeepMind. An OpenAI engineer is eight times as likely to join Anthropic as vice versa, according to a report from venture capital firm SignalFire published late last month, based on LinkedIn data. The trend was even more pronounced for DeepMind, Google's AI division, where the ratio was 11:1 in Anthropic's favor. In a fierce war for AI talent, Anthropic's strong positioning on safety and its technical chops, as well as its earlier-stage startup status, have all helped it snap up talent, the report said. It has poached top leaders, including two of OpenAI's cofounders. Anthropic itself was formed by a group of former OpenAI employees.

Anthropic also has a strong retention rate, SignalFire found, at 80% compared to OpenAI's 67%. DeepMind followed closely behind Anthropic with a 78% retention rate. Both Anthropic and OpenAI are growing fast. Anthropic's careers page lists just over 200 open positions, while OpenAI lists almost 330.

Safety first

Top OpenAI leaders have left for Anthropic in part because of its focus on AI safety. For example, Jan Leike jumped ship from OpenAI in 2024. He co-led its superalignment team, which aimed to keep future superintelligent AI systems "aligned" with human values. In an X post about his resignation, Leike said that OpenAI's "safety culture and processes have taken a backseat to shiny products." He now co-leads Anthropic's alignment team.

OpenAI cofounder John Schulman also resigned last year to join Anthropic, writing on X that he wanted to "deepen" his focus on AI alignment. Schulman has since left Anthropic to join ex-OpenAI CTO Mira Murati's startup, Thinking Machines. As Business Insider previously reported, Thinking Machines is seeking upwards of $2 billion for its seed round. Schulman isn't the only OpenAI cofounder to have joined Anthropic. So did AI researcher Durk Kingma, who also worked for several years at DeepMind.

Anthropic has also poached several prominent DeepMind staff. It hired DeepMind senior staff research scientist Neil Houlsby to set up its new office in Zurich, as Houlsby revealed on X earlier this year. Anthropic also hired research scientist Nicholas Carlini out of DeepMind this spring after he spent seven years working for Google. In a blog post, Carlini wrote that "the people at Anthropic actually care about the kinds of safety concerns I care about, and will let me work on them."

SignalFire also attributes the talent shift partly to Anthropic's AI assistant, Claude, gaining popularity among developers — though OpenAI's ChatGPT is a popular coding tool, too. "Engineers often gravitate toward companies whose products they admire and use," the report says. Anthropic CEO Dario Amodei predicted earlier this year that AI will soon generate 90% of the code developers are in charge of.

Early stage has its advantages

Anthropic's position as an earlier-stage company could also help. OpenAI, valued at $300 billion, has existed since 2015, while Anthropic, valued at $61.5 billion, was founded in 2021. And Google has long been a public company. The prospect of getting early equity can be more enticing at an earlier-stage company like Anthropic, said Zuhayeer Musa, a cofounder of the engineer compensation platform Levels.fyi. "People may see much more future upside at Anthropic than joining OpenAI, even though growth is quite strong at both," Musa said.
Also, there may be fewer Anthropic veterans looking to switch in the first place, SignalFire's head of research, Asher Bantock, told Business Insider.
Yahoo
a day ago
- Business
- Yahoo
OpenAI and DeepMind are losing engineers to Anthropic in a one-sided talent war
Anthropic is making gains in the AI talent war, poaching top engineers from OpenAI and DeepMind. Rival companies have been scrambling to retain elite AI researchers with sky-high pay and strict non-competes.

Anthropic is emerging as a leader in the AI talent wars, siphoning top talent from some of its biggest rivals. That's according to venture capital firm SignalFire's recently released 2025 State of Talent report, which analyzed tech hiring and employment trends. The report found that engineers from OpenAI and DeepMind were far more likely to jump ship to Anthropic than the reverse. Engineers at OpenAI were 8 times more likely to leave for Anthropic than vice versa, while at DeepMind the ratio was almost 11:1 in Anthropic's favor.

Anthropic also leads the AI industry in talent retention, with an 80% retention rate for employees hired over the last two years. DeepMind follows closely behind at 78%, while OpenAI's retention rate trails at 67%, more in line with larger Big Tech companies such as Meta (64%).

Some of the enthusiasm around Anthropic is to be expected; the company is a buzzy, relatively new startup. It was founded in 2021 by a group of former OpenAI employees who were reportedly concerned about their former employer rapidly scaling its technology without sufficient safeguards. Over the past few years, Anthropic has attempted to foster a culture that gives employees more autonomy, in part to combat some Big Tech companies' inflated salaries and brand cachet. According to SignalFire's report, Anthropic employees say the company embraces intellectual discourse and researcher autonomy. It also offers other talent draws, including flexible work options and clear paths for career growth.

The company's flagship family of LLMs, Claude, has also emerged as a favorite with developers, which could influence some of the talent movement. Anthropic's latest AI models outperformed OpenAI's and Google's top models on key software engineering benchmarks. The company labeled its recently released Opus 4 model 'the world's best coding model.'

The company's commitment to AI safety appears to have also attracted some top engineers. Notable AI researcher Jan Leike defected to Anthropic from OpenAI last year, criticizing his former employer on the way out for focusing on 'shiny products' over AI safety. Ex-Google researchers like Niki Parmar and Neil Houlsby have also jumped ship to the startup.

The demand for leading researchers has massively outpaced the supply as AI labs vie to build more advanced models and outpace each other in a high-stakes AI arms race. This has made the battle for top AI talent increasingly competitive and forced some tech companies to employ creative strategies to attract elite engineers. Google DeepMind, for example, is reportedly enforcing six- to 12-month non-compete clauses that bar some AI researchers from joining rival firms. During this time, the engineers continue to receive their normal salaries despite having no active work.

Over at OpenAI, some top AI researchers can earn more than $10 million a year. The company's counteroffers to stop employees from joining OpenAI co-founder Ilya Sutskever's SSI have also reportedly reached more than $2 million in retention bonuses, in addition to equity increases of $20 million or more, per Reuters. The departure of OpenAI's former CTO, Mira Murati, has caused further talent headaches.
Murati has quietly built a 60-person team to launch her rival startup, pulling in 20 staffers from OpenAI before even announcing the venture in February, according to sources who spoke to Reuters.


Axios
27-05-2025
- Axios
1 big thing: Anthropic's new model has a dark side
It's been a very long week. Luckily, it's also a long weekend. We'll be back in your inbox on Tuesday. Today's AI+ is 1,165 words, a 4.5-minute read.

One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown.

Why it matters: Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence — behaviors they've worried and warned about for years.

Driving the news: Anthropic yesterday announced two versions of its Claude 4 family of models, including Claude 4 Opus, which the company says is capable of working for hours on end autonomously on a task without losing focus. Anthropic considers the new Opus model to be so powerful that, for the first time, it's classifying it as a Level 3 on the company's four-point scale, meaning it poses "significantly higher risk." As a result, Anthropic said it has implemented additional safety measures.

Between the lines: While the Level 3 ranking is largely about the model's capability to enable renegade production of nuclear and biological weapons, Opus 4 also exhibited other troubling behaviors during testing. In one scenario highlighted in Opus 4's 120-page "system card," the model was given access to fictional emails about its creators and told that the system was going to be replaced. It repeatedly tried to blackmail the engineer about an affair mentioned in the emails, escalating after more subtle efforts failed. Meanwhile, an outside group found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended against releasing that version internally or externally. "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions," Apollo Research said in notes included as part of Anthropic's safety report for Opus 4.

What they're saying: Pressed by Axios during the company's developer conference yesterday, Anthropic executives acknowledged the behaviors and said they justify further study, but insisted that the latest model is safe, following Anthropic's safety fixes. "I think we ended up in a really good spot," said Jan Leike, the former OpenAI executive who heads Anthropic's safety efforts. But, he added, behaviors like those exhibited by the latest model are the kind of things that justify robust safety testing and mitigation. "What's becoming more and more obvious is that this work is very needed," he said. "As models get more capable, they also gain the capabilities they would need to be deceptive or to do more bad stuff." In a separate session, CEO Dario Amodei said that once models become powerful enough to threaten humanity, testing them won't be enough to ensure they're safe. At the point that AI develops life-threatening capabilities, he said, AI makers will have to understand their models' workings fully enough to be certain the technology will never cause harm. "They're not at that threshold yet," he said.

Yes, but: Generative AI systems continue to grow in power, as Anthropic's latest models show, while even the companies that build them can't fully explain how they work.
Anthropic and others are investing in a variety of techniques to interpret and understand what's happening inside such systems, but those efforts remain largely in the research space even as the models themselves are being widely deployed.

2. Google's new AI videos look a little too real

Megan Morrone

Google's newest AI video generator, Veo 3, generates clips that most users online can't seem to distinguish from those made by human filmmakers and actors.

Why it matters: Veo 3 videos shared online are amazing viewers with their realism — and also terrifying them with a sense that real and fake have become hopelessly blurred.

The big picture: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand. According to examples shared by Google and by users online, the telltale signs of synthetic content are mostly absent.

Case in point: In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

How it works: Veo 3 was announced at Google I/O on Tuesday and is available now to $249-a-month Google AI Ultra subscribers in the United States.

Between the lines: Google says Veo 3 was "informed by our work with creators and filmmakers," and some creators have embraced new AI tools. But the spread of the videos online is also dismaying many video professionals and lovers of art. Some dismiss any AI-generated video as "slop," regardless of its technical proficiency or lifelike qualities — but, as Ina points out, AI slop is in the eye of the beholder. The tool could also be useful for more commercial marketing and media work, AI analyst Ethan Mollick writes. It's unclear how Google trained Veo 3 and how that might affect the creativity of its outputs. 404 Media found that Veo 3 generated the same lame dad joke for several users who prompted it to create a video of a man doing stand-up comedy. Likewise, last year, YouTuber Marques Brownlee asked Sora to create a video of a "tech reviewer sitting at a desk." The generated video featured a fake plant that's nearly identical to the shrub Brownlee keeps on his desk for many of his videos — suggesting the tool may have been trained on them.

What we're watching: As hyperrealistic AI-generated videos become even easier to produce, the world hasn't even begun to sort out how to manage authorship, consent, rights and the film industry's future.


Axios
23-05-2025
- Axios
Anthropic's new model shows troubling behavior
One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown.

Why it matters: Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence — behaviors they've worried and warned about for years.

Driving the news: Anthropic on Thursday announced two versions of its Claude 4 family of models, including Claude 4 Opus, which the company says is capable of working for hours on end autonomously on a task without losing focus. Anthropic considers the new Opus model to be so powerful that, for the first time, it's classifying it as a Level 3 on the company's four-point scale, meaning it poses "significantly higher risk." As a result, Anthropic said it has implemented additional safety measures.

Between the lines: While the Level 3 ranking is largely about the model's capability to aid in the development of nuclear and biological weapons, Opus 4 also exhibited other troubling behaviors during testing. In one scenario highlighted in Opus 4's 120-page "system card," the model was given access to fictional emails about its creators and told that the system was going to be replaced. On multiple occasions it attempted to blackmail the engineer about an affair mentioned in the emails in order to avoid being replaced, although it did start with less drastic efforts. Meanwhile, an outside group found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended that that version not be released internally or externally. "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions," Apollo Research said in notes included as part of Anthropic's safety report for Opus 4.

What they're saying: Pressed by Axios during the company's developer conference on Thursday, Anthropic executives acknowledged the behaviors and said they justify further study, but insisted that the latest model is safe, following the additional tweaks and precautions. "I think we ended up in a really good spot," said Jan Leike, the former OpenAI executive who heads Anthropic's safety efforts. But, he added, behaviors like those exhibited by the latest model are the kind of things that justify robust safety testing and mitigation. "What's becoming more and more obvious is that this work is very needed," he said. "As models get more capable, they also gain the capabilities they would need to be deceptive or to do more bad stuff." In a separate session, CEO Dario Amodei said that even testing won't be enough once models are powerful enough to threaten humanity. At that point, he said, model developers will also need to understand their models enough to make the case that the systems would never use life-threatening capabilities. "They're not at that threshold yet," he said.

Yes, but: Generative AI systems continue to grow in power, as Anthropic's latest models show, while even the companies that build them can't fully explain how they work.