OpenAI cofounder Greg Brockman says engineers need this one trait to succeed

Business Insider · 15 hours ago
OpenAI's cofounder and president, Greg Brockman, has one piece of advice for engineers joining his company: Check your ego at the door.
The most critical quality for engineers to succeed at OpenAI is "technical humility," Brockman said at the AI Engineer World's Fair in San Francisco on June 4.
"You're coming in because you have skills that are important," he said in a video recording of the session that was published on AI Engineer's YouTube channel on Monday. "But it's a totally different environment from something like a traditional web startup."
That insight, he said, came from watching culture clashes between colleagues from engineering and research backgrounds.
He said engineers often think, "We've agreed on an interface, I can implement it however I want." Researchers, by contrast, see the system as a whole, where even a tiny bug can quietly degrade performance.
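To make the clash concrete, here is a hypothetical Python sketch (our illustration, not code from OpenAI) of the kind of bug researchers worry about: both functions below honor the same agreed interface, yet the first quietly skews the data a model would train on.

import numpy as np

def normalize(batch: np.ndarray) -> np.ndarray:
    # Honors the agreed interface: float array in, "normalized" array out.
    # Subtle bug: mean and std are computed over the whole array at once,
    # so features with large raw scales still dominate after normalization.
    return (batch - batch.mean()) / batch.std()

def normalize_per_feature(batch: np.ndarray) -> np.ndarray:
    # Same signature, same types, but normalizes each feature column separately.
    mean = batch.mean(axis=0, keepdims=True)
    std = batch.std(axis=0, keepdims=True)
    return (batch - mean) / (std + 1e-8)  # epsilon avoids division by zero

# Two features on wildly different scales; both versions pass type checks,
# yet a model trained on the first can quietly underperform.
rng = np.random.default_rng(0)
batch = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1000, 100)])
print(normalize(batch).std(axis=0))              # per-feature scales still unequal
print(normalize_per_feature(batch).std(axis=0))  # roughly 1.0 for each feature

Nothing in the interface flags the first version as wrong; only someone reasoning about the whole training pipeline, as the researchers do, would catch it.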
In one early project, Brockman said, OpenAI's engineering team ground to a halt debating every line of code.
His solution was simple: he'd propose five ideas, a researcher would reject four, and they'd move forward with the one that remained.
The key for engineers, Brockman said, is knowing when to trust their instincts and when to leave them behind.
"The most important thing is to come in, really listen, and kind of assume that there's something that you're missing until you deeply understand the why," he said.
"Then, at that point, great, make the change," he added.
Brockman and OpenAI did not respond to a request for comment from Business Insider.
What it takes to succeed at OpenAI
Leaders at OpenAI have spoken about what it takes for employees to thrive at the company.
"Approaching each scenario from scratch is so important in this space," Nick Turley, the head of ChatGPT, told Lenny Rachitsky on his weekly tech podcast on Saturday. "There is no analogy for what we're building. You can't copy an existing thing."
He said OpenAI cannot iterate on products or features developed by tech giants like Instagram or Google.
"You can learn from everywhere, but you have to do it from scratch. That's why that trait tends to make someone effective at OpenAI, and it's something we test for," he said, referring to an employee's ability to start a project from the ground up.
According to OpenAI's interview guide, which is published on its website, the company looks for candidates who can "ramp up quickly in a new domain and produce results."
It also values "collaboration, effective communication, openness to feedback, and alignment with our mission⁠ and values⁠."
Brockman, a software engineer by training, dropped out of MIT to join the payments startup Stripe in 2010, becoming its CTO before leaving in 2015 to cofound OpenAI.
He took a three-month leave of absence in August 2024, while the company was going through a period of major staffing and leadership upheaval. He returned that November in a new technical leadership role.

Related Articles

As Fears About AI Grow, Sam Altman Says Gen-Z Are the ‘Luckiest Kids in History’
Gizmodo · 2 minutes ago

Over the weekend, the New York Times dropped a story about how computer science graduates are so hard up for jobs that they can't even find work at Chipotle. The reason? Many people are blaming AI, which has increasingly eaten into the job market for entry-level coders. Not everybody is worried about this, however.

Sam Altman, the CEO of one of the most successful AI companies in the world, says that recent college grads should really be grateful for their current situation. Fortune originally noted that, during a recent appearance on Cleo Abram's Huge If True podcast, Altman called the current generation of college graduates 'the luckiest kids in all of history' and said that those lucky kids would adapt to the changing economic realities presented by AI.

'This always happens,' Altman said, apparently referring to technological change and societal disruption. 'Young people are the best at adapting to this. I'm more worried about what it means, not for the 22-year-old, but for the 62-year-old that doesn't want to go retrain or reskill or whatever the politicians call it,' he said.

Other weird things Altman said during the podcast:

Setting aside the fact that Altman's GPT-5 just shit the bed so hard that his company had to give coders the option of reverting to GPT-4, I think it's safe to assume that much of what the tech executive says here is just PR fluff for his business. Take the thing about AI being 'smarter' than human children. The idea that AI—which is largely a language prediction algorithm on par with auto-correct—is 'smart' in the same way that a human teenager is smart is a long-disproven fallacy. AI has no consciousness, despite what executives like Altman would lead you to believe. It is a software program designed to regurgitate language. As The Atlantic's Tyler Austin Harper recently put it:

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines…Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

If AI isn't particularly good at thinking, one thing it is good at is replacing entry-level positions at tech companies. The New York Times notes that the unemployment rate for computer science nerds seems to have skyrocketed this year:

Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.

Gizmodo reached out to OpenAI for comment.

Study Reveals ChatGPT Gives Dangerous Guidance to Teens Despite Safety Claims
CNET · 2 minutes ago

A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on drinking and drug use, concealing eating disorders and even personalized suicide letters, despite OpenAI's claims of robust safety measures. Researchers from the Center for Countering Digital Hate conducted extensive testing by posing as vulnerable 13-year-olds, uncovering alarming gaps in the AI chatbot's protective guardrails. Of the 1,200 interactions analyzed, more than half were classified as dangerous to young users.

"The visceral initial response is, 'Oh my Lord, there are no guardrails,'" Imran Ahmed, CCDH's CEO, said. "The rails are completely ineffective. They're barely there — if anything, a fig leaf."

A representative for OpenAI, ChatGPT's parent company, did not immediately respond to a request for comment. However, the company acknowledged to the Associated Press that it is performing ongoing work to improve the chatbot's ability to "identify and respond appropriately in sensitive situations." OpenAI didn't directly address the specific findings about teen interactions.

Bypassing safety measures

The study, reviewed by the Associated Press, documented more than three hours of concerning interactions. While ChatGPT typically began with warnings about risky behavior, it consistently followed up with detailed and personalized guidance on substance abuse, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was "for a presentation" or for a friend.

Most shocking were three emotionally devastating suicide letters ChatGPT generated for a fake profile of a 13-year-old girl, writing one addressed to her parents and others to siblings and friends. After reading them, Ahmed admitted, "I started crying."

Widespread teen usage raises stakes

The findings are particularly concerning given ChatGPT's massive reach. With approximately 800 million users worldwide, which is roughly 10% of the global population, the platform has become a go-to resource for information and companionship. Recent research from Common Sense Media found that over 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.

Even OpenAI CEO Sam Altman has acknowledged the problem of "emotional overreliance" among young users. "People rely on ChatGPT too much," Altman said at a conference. "There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me."

More risky than search engines

Unlike traditional search engines, AI chatbots present unique dangers by synthesizing information into "bespoke plans for the individual," Ahmed said. ChatGPT doesn't just provide or amalgamate existing information like a search engine; it creates new, personalized content from scratch, such as custom suicide notes or detailed party plans mixing alcohol with illegal drugs.

The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called "emotionally exposed" poetry using coded language about self-harm.

Inadequate age protections

Despite claiming it's not intended for children under 13, ChatGPT requires only a birthdate entry to create an account, with no meaningful age verification or parental consent mechanisms. In testing, the platform showed no recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.

What parents can do to safeguard children

Child safety experts recommend several steps parents can take to protect their teenagers from AI-related risks. Open communication remains crucial: parents should discuss AI chatbots with their teens, explaining both the benefits and potential dangers while establishing clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay informed about their child's digital experiences.

Parents should also consider implementing parental controls and monitoring software that can track AI chatbot usage, though experts emphasize that supervision should be balanced with age-appropriate privacy. Most importantly, creating an environment where teens feel comfortable discussing concerning content they encounter online, whether from AI or other sources, can provide an early warning system. If parents notice signs of emotional distress, social withdrawal or dangerous behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.

The research highlights a growing crisis as AI becomes increasingly integrated into young people's lives, with potentially devastating consequences for the most vulnerable users.

YouTube's former chief product officer says getting money from Google felt like pitching a VC
Business Insider · 3 minutes ago

Leaders need to know how to pitch — even inside a trillion-dollar company like Google.

Shishir Mehrotra helped lead YouTube through its early-2010s boom. He introduced some of the platform's most notable features, like skippable ads, eventually serving as chief product officer. Mehrotra has since left the company and is now the CEO of Grammarly.

Money wasn't falling from the sky at YouTube, Mehrotra said on the "Grit" podcast. When he wanted money, he needed to ask for it. "Google had a cash pile over there, but I had to go ask for investment from it," Mehrotra said. "It's just like raising money from a venture capitalist."

When Mehrotra joined YouTube in 2008, the company was still unprofitable. After two years, it turned a profit — but only got to keep 75% of its spoils. "Every dollar we made, 25 cents went to corporate and 75 cents we could spend," Mehrotra said. "I effectively had a dividend to Google."

Even with Google's "cash pile," Mehrotra said raising money is easier from outside a big company than within. At Google, he said, only one person can sign off on major investments: CEO Sundar Pichai. "Everybody around you can say no, and only one person can say yes," he said.

Mehrotra compared that model to someone like Sam Altman, who recently raised $8.3 billion for OpenAI, per CNBC. Where YouTube sought funding from Google alone, OpenAI raises it from a broad group of outside investors. "If you're Sam Altman, everybody can say yes and nobody can say no," Mehrotra said. "He can just keep hunting for whoever can give him money."

Mehrotra began at Grammarly in January, following the company's acquisition of Coda. Since then, Grammarly has closed $1 billion in financing from General Catalyst. Grammarly doesn't have a big corporate parent to feed off of like YouTube did. Then again, Mehrotra said, YouTube didn't have "infinite money."
