Latest news with #JakubPachocki

Business Insider
5 days ago
- Business
- Business Insider
'No names': A brief moment in an interview with OpenAI engineers highlights the state of the AI talent wars
An OpenAI engineer said some of the company's "most-prized" talent is focused on debugging AI models — just don't expect him to mention their names. OpenAI chief data scientist Jakub Pachocki and Szymon Sidor, an OpenAI technical fellow, recently appeared on the "Before AGI" podcast and chatted about working at the company. A brief exchange in the episode stands out for what isn't said — and is an indicator of just how protective AI companies have become amid the AI talent wars, as Big Tech circles star employees.

"We hired a bunch more people at OpenAI who are really great at debugging," Sidor said while speaking about the importance of debugging AI models. "And I think those are some of our most-prized employees, and I won't even…" Before Sidor could complete his thought (he mentions not going into "details"), another person on the podcast interjected, "No names," before laughter can be heard.

It's unclear who jumped in with "No names" — it was either OpenAI's Pachocki or podcast host Aleksander Mądry. While it would typically be easy to figure that out by watching the video version of the podcast, that part of the exchange is absent from both video versions published to YouTube and X. (You can hear that bit in the audio-only version on Spotify and Apple Podcasts.) Business Insider reached out to Mądry and OpenAI for comment.

Mądry, the host of "Before AGI," is an MIT professor who is working at OpenAI while on leave from the university, where he is director of the MIT Center for Deployable Machine Learning.

Regardless of why the exchange isn't found in the video version of the podcast (it's entirely possible the video version was simply edited down for length, flow, or some other reason), Sidor's apparent reluctance to name-drop the employees he considers some of OpenAI's "most-prized" is telling — if not particularly surprising.
After all, Mark Zuckerberg reportedly created a list of AI stars to poach for Meta's Superintelligence Lab, and successfully hired away Shengjia Zhao, a co-creator of ChatGPT and former lead scientist at OpenAI, and Scale AI founder Alexandr Wang, among others. Sam Altman said earlier this year that Meta was offering his company's top researchers compensation packages worth up to $100 million, and Google recently hired away Windsurf's CEO along with multiple employees who had been set to join OpenAI. It's an all-out fight over top AI talent — so it's understandable that an OpenAI engineer might want to be tight-lipped.


India Today
14-05-2025
- Science
- India Today
AI models can do unique and completely new science research, says OpenAI chief scientist
AI is on track to become much more than just an assistant. OpenAI's chief scientist Jakub Pachocki suggested that in the near future, AI models will be capable of carrying out original scientific research on their own — a leap beyond simply helping humans with existing knowledge, according to a report.

Pachocki, who took over as OpenAI's chief scientist in 2024, says models are gradually moving towards being able to work with minimal human input. While tools like ChatGPT still depend on prompts and constant direction, OpenAI has already developed systems that can work independently for short durations and produce useful results. He says this could evolve rapidly if more computing power is applied to open-ended scientific research.

He points to Deep Research, a system designed by OpenAI, as an early example of this. Even with limited compute, the tool can already process and analyse large volumes of information unsupervised for several minutes. With more advanced models and greater resources, Pachocki says AI could soon contribute in fields like automated software development, hardware engineering, and even novel scientific discovery.

At the heart of these capabilities is reinforcement learning — a process that teaches models through feedback and repetition. Pachocki explains that while AI models initially learn by absorbing massive datasets during pre-training, the real progress happens when reinforcement learning helps them develop strategies to solve complex problems.

He says recent improvements in reasoning models are largely driven by how reinforcement learning is being used — not just to polish the model, but to help it develop its own way of thinking and decision-making. This is a change from earlier models that simply imitated human-like output based on patterns in data.

Pachocki also acknowledges that AI models do not think the way humans do. They don't remember how or when they learned something, and they lack an understanding of time and experience.
Still, their ability to simulate logical steps and solve problems makes them useful tools for tasks that require structured reasoning.

As OpenAI works on more powerful models and prepares to release an open-weight version soon, Pachocki is focused on pushing the boundaries of what AI can achieve in science and technology. While the debate around artificial general intelligence (AGI) continues, he believes the real test will be how well these systems can operate independently and tackle problems we haven't solved yet.