
OpenAI co-founder wanted a 'doomsday bunker' for the ChatGPT team, and why CEO Sam Altman is the reason behind it

Time of India

27-05-2025



Former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often described as a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil. "We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The startling disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover.

Sutskever, who co-created the groundbreaking AlexNet in 2012 alongside AI pioneer Geoffrey Hinton, believed his fellow researchers would require protection once AGI was achieved. He reasoned that such powerful technology would inevitably become "an object of intense desire for governments globally."

What made the OpenAI co-founder want a 'doomsday bunker'

Sutskever and others worried that CEO Altman's focus on commercial success was compromising the company's commitment to developing AI safely. These tensions were exacerbated by ChatGPT's unexpected success, which unleashed a "funding gold rush" that safety-minded Sutskever could no longer control. "There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao. "Literally a rapture."

This apocalyptic mindset partially motivated Sutskever's participation in the board revolt against Altman. However, the coup collapsed within a week, leading to Altman's return and the eventual departure of Sutskever and other safety-focused researchers. The failed takeover, now called "The Blip" by insiders, left Altman more powerful than before while driving out many of OpenAI's safety experts who were aligned with Sutskever's cautious approach.

Since leaving OpenAI, Sutskever has founded Safe Superintelligence Inc., though he has declined to comment on his earlier bunker proposals. His departure represents a broader exodus of safety-focused researchers who felt the company had abandoned its original mission of developing AI that broadly benefits humanity in favor of rapid commercialization.

The timing of AGI remains hotly debated across the industry. While Altman recently claimed AGI is possible with current hardware, Microsoft AI CEO Mustafa Suleyman disagrees, predicting it could take up to 10 years to achieve. Google co-founder Sergey Brin and DeepMind CEO Demis Hassabis see AGI arriving around 2030. However, AI pioneer Geoffrey Hinton warns there is no consensus on what AGI actually means, calling it "a serious, though ill-defined, concept." Despite disagreements over definitions and timelines, most industry leaders now view AGI as an inevitability rather than a possibility.
