Sam Altman just hired a CEO of Applications to help him grow his AI empire


Instacart chair and CEO Fidji Simo will join OpenAI as its new CEO of Applications, the ChatGPT maker's CEO, Sam Altman, said on Thursday.
"I'll remain CEO of OpenAI, but in this new configuration I'll be able to increase my focus on research, compute, and safety," Altman wrote in an X post announcing Simo's hiring on Thursday morning.
Altman first announced Simo's hiring in a message to employees on Wednesday; OpenAI published the message as a blog post the same day.
"Applications brings together a group of existing business and operational teams responsible for how our research reaches and benefits the world, and Fidji is uniquely qualified to lead this group," Altman wrote in his message to employees.
Altman said in his message that Simo "has already contributed a great deal to our company" since joining OpenAI's board in March 2024. He added that Simo will start work at OpenAI later this year and "will transition from her role at Instacart over the next few months."
In her new role, Simo will report directly to Altman.
"Fidji is exceptional; we have worked together on OpenAI for the past year and I have observed her deep commitment to our mission," Altman wrote on X on Thursday.
"I cannot imagine a better new team member to help us scale the next 10x (or 100x, let's see)," Altman added.
When approached for comment, a spokesperson for OpenAI referred Business Insider to its blog post.
so excited that @fidjissimo is joining openai in a new role: ceo of applications, reporting to me.
i'll remain ceo of openai, but in this new configuration i'll be able to increase my focus on research, compute, and safety.
these are critical as we approach superintelligence.
— Sam Altman (@sama) May 8, 2025
Simo started her career at eBay before moving to Meta, where she oversaw Facebook's app and advertising products.
She joined Instacart as a board member in January 2021 and became its CEO in August 2021.
On Thursday, Simo wrote in an X post that she will remain Instacart's CEO for the next few months and will continue as chair of its board after she joins OpenAI.
So excited to be joining @openai and contributing to its mission. Thank you @sama for the opportunity - it will be such a privilege to work with such a talented team on one of the most important and ambitious endeavors in history.
I'll remain CEO of @Instacart for the next few… https://t.co/hDV3QhQrxj
— Fidji Simo (@fidjissimo) May 8, 2025
"I'm so grateful to my Instacart team for the amazing ride we had together, and I look forward to supporting the next CEO during the transition," Simo added.
Simo did not immediately respond to a request for comment from BI.


Related Articles

Here's how Uber's product chief uses AI at work — and one tool he's going to use next

Business Insider

Uber's chief product officer has one AI tool on his to-do list. In an episode of "Lenny's Podcast" released on Sunday, Uber's product chief, Sachin Kansal, shared two ways he is using AI for his everyday tasks at the ride-hailing giant and how he plans to add NotebookLM to his AI suite.

Kansal joined Uber eight years ago as its director of product management after working at cybersecurity and taxi startups. He became Uber's product chief last year.

Kansal said he uses OpenAI's ChatGPT and Google's Gemini to summarize long reports. "Some of these reports, they're 50 to 100 pages long," he said. "I will never have the time to read them." He said he uses the chatbots to acquaint himself with what's happening and how riders are feeling in Uber's various markets, such as South Africa, Brazil, and Korea.

The CPO said his second use case is treating AI like a research assistant, because some large language models now offer a deep research feature. Kansal gave a recent example of when his team was thinking about a new driver feature. He asked ChatGPT's deep research mode what drivers might think of the add-on. "It's an amazing research assistant and it's absolutely a starting point for a brainstorm with my team with some really, really good ideas," the CPO said.

In April, Uber's CEO, Dara Khosrowshahi, said that not enough of his 30,000-odd employees are using AI. He said learning to work with AI agents to code is "going to be an absolute necessity at Uber within a year." Uber did not immediately respond to a request for comment from Business Insider.

Kansal's next tool: NotebookLM

On the podcast, Kansal also highlighted NotebookLM, Google Labs' research and note-taking tool, which is especially helpful for interacting with documents. He said he doesn't use the product yet, but wants to. "I know a lot of people who have started using it, and that is the next thing that I'm going to use," he said. "Just to be able to build an audio podcast based on a bunch of information that you can consume. I think that's awesome," he added. Kansal was referring to the "Audio Overview" feature, which summarizes uploaded content in the form of two AI voices holding a discussion.

NotebookLM was launched in mid-2023 and has quickly become a must-have tool for researchers and AI enthusiasts. Andrej Karpathy, Tesla's former director of AI and an OpenAI cofounder, is among those who have praised the tool and its podcast feature. "It's possible that NotebookLM podcast episode generation is touching on a whole new territory of highly compelling LLM product formats," he said in a September post on X. "Feels reminiscent of ChatGPT. Maybe I'm overreacting."

Ads Ruined Social Media. Now They're Coming to AI.

Bloomberg

Chatbots might hallucinate and sprinkle too much flattery on their users — 'That's a fascinating question!' one recently told me — but at least the subscription model that underpins them is healthy for our wellbeing. Many Americans pay about $20 a month to use the premium versions of OpenAI's ChatGPT, Google's Gemini Pro or Anthropic's Claude, and the result is that the products are designed to provide maximum utility. Don't expect this status quo to last. Subscription revenue has a limit, and Anthropic's new $200-a-month 'Max' tier suggests even the most popular models are under pressure to find new revenue streams.

Hey chatbot, is this true? AI 'factchecks' sow misinformation

Yahoo

As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information. "Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."
