OpenAI's head of ChatGPT shares the one trait you need to be successful at the company
Succeeding at the company requires the ability to work without a playbook.
"Approaching each scenario from scratch is so important in this space," Nick Turley, the head of ChatGPT, told Lenny Rachitsky on his weekly tech podcast on Saturday. "There is no analogy for what we're building. You can't copy an existing thing."
He added that OpenAI cannot simply iterate on products or features built by established tech companies like Instagram, Google, or a well-known productivity-tool maker.
"You can learn from everywhere, but you have to do it from scratch. That's why that trait tends to make someone effective at OpenAI, and it's something we test for," he said, referring to an employee's ability to start a project from the ground up.
Turley also discussed OpenAI's release strategies. The company unveiled GPT-5 on Thursday to mixed reviews. He said OpenAI intentionally ships features before they're fully ready.
"We get a lot of crap for the model chooser," he said, referring to the now-discontinued dropdown menu in ChatGPT that allowed users to select between OpenAI's models.
Turley's thesis is that it's better to "ship out something raw even if it makes less sense" so you can start learning from real users. Companies often wait to unveil products because they want to adhere to "a quality bar," but it's actually better to roll out features quickly to ensure you get the feedback you need, he said.
"The reason really is that you're gonna be polishing the wrong things in this space," he said. "You won't know what to polish until after you ship," he added, calling this "uniquely true in an environment where the properties of your product are emergent."
The launch of GPT-5 is an example of this philosophy at work.
After criticism, the model switcher has been removed and replaced with a "real-time router" that selects the most appropriate model to handle each user request.
"Previously, you had to go deal with the model picker in ChatGPT," OpenAI COO Brad Lightcap said in an interview with Big Technology, a tech newsletter, on Friday. "You had to select a model that you wanted to use for a given task. And then you'd run the process of asking a question, getting an answer. Sometimes you choose a thinking model, sometimes you wouldn't. And that was, I think, a confusing experience for users."
Of course, it's impossible to satisfy everyone. After GPT-5's release, users immediately complained that they could no longer access a fan-favorite older model. So OpenAI CEO Sam Altman said he'd bring it back.