Latest news with #Sutskever


Time of India
2 days ago
- Business
- Time of India
A film about Sam Altman's OpenAI boardroom chaos is in the works
A movie about Sam Altman's wild OpenAI week is in the works, with the chaotic 2023 saga that saw the CEO fired and rehired within just five days now heading to the big screen under acclaimed director Luca Guadagnino for Amazon MGM Studios. The film, titled "Artificial," has Andrew Garfield circling the role of Altman, with Monica Barbaro ("A Complete Unknown") and Yura Borisov ("Anora") being considered to play former CTO Mira Murati and co-founder Ilya Sutskever respectively, according to The Hollywood Reporter. Sutskever was instrumental in Altman's initial ouster. Simon Rich, known for his work on "Saturday Night Live," penned the screenplay and will produce alongside Heyday Films' David Heyman, who's behind the "Harry Potter" franchise and "Barbie." The comedic drama approach seems fitting given the almost absurd nature of the real events. Amazon is reportedly moving at breakneck speed, eyeing a summer production start with filming planned for San Francisco and Italy. The studio has already worked with Guadagnino on "Challengers" and the upcoming Julia Roberts film "After the Hunt."

The corporate drama that wrote itself

The saga dates back to November 2023, when the OpenAI board lost confidence in Altman's leadership, citing safety concerns and reports of problematic behavior. New details from an upcoming book reveal that board members grew alarmed after discovering issues such as Altman allegedly owning the OpenAI Startup Fund personally, while Sutskever and Murati compiled evidence of what they viewed as toxic management practices. However, the firing spectacularly backfired. Within days, nearly all OpenAI employees, including Sutskever and Murati themselves, signed a letter demanding Altman's return. The board capitulated, reinstating him as CEO, while the rebels who orchestrated his removal eventually left to start their own ventures. The timing couldn't be more perfect for an AI-themed comedy-drama, and the real-life saga had all the elements of a Silicon Valley thriller: corporate intrigue, power struggles, and a remarkable reversal of fortune that left the AI world spinning.


Time of India
27-05-2025
- Business
- Time of India
OpenAI co-founder wanted a 'doomsday bunker' for the ChatGPT team: why CEO Sam Altman is the reason behind it
Former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often called a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil. "We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The startling disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover. Sutskever, who co-created the groundbreaking AlexNet in 2012 alongside AI pioneer Geoff Hinton, believed his fellow researchers would require protection once AGI was achieved. He reasoned that such powerful technology would inevitably become "an object of intense desire for governments globally."

What made the OpenAI co-founder want a 'doomsday bunker'

Sutskever and others worried that CEO Altman's focus on commercial success was compromising the company's commitment to developing AI safely. These tensions were exacerbated by ChatGPT's unexpected success, which unleashed a "funding gold rush" that the safety-minded Sutskever could no longer control. "There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao. "Literally a rapture." This apocalyptic mindset partially motivated Sutskever's participation in the board revolt against Altman. However, the coup collapsed within a week, leading to Altman's return and the eventual departure of Sutskever and other safety-focused researchers. The failed takeover, now called "The Blip" by insiders, left Altman more powerful than before while driving out many of OpenAI's safety experts who were aligned with Sutskever's cautious approach.

Since leaving OpenAI, Sutskever has founded Safe Superintelligence Inc., though he has declined to comment on his previous bunker proposals. His departure represents a broader exodus of safety-focused researchers who felt the company had abandoned its original mission of developing AI that benefits humanity broadly, rather than pursuing rapid commercialization.

The timing of AGI remains hotly debated across the industry. While Altman recently claimed AGI is possible with current hardware, Microsoft AI CEO Mustafa Suleyman disagrees, predicting it could take up to 10 years to achieve. Google co-founder Sergey Brin and DeepMind CEO Demis Hassabis see AGI arriving around 2030. However, AI pioneer Geoffrey Hinton warns there's no consensus on what AGI actually means, calling it "a serious, though ill-defined, concept." Despite disagreements over definitions and timelines, most industry leaders now view AGI as an inevitability rather than a possibility.
Yahoo
22-05-2025
- Business
- Yahoo
OpenAI's Top Scientist Wanted to "Build a Bunker Before We Release AGI"
OpenAI's former chief scientist, Ilya Sutskever, has long been preparing for artificial general intelligence (AGI), an ill-defined industry term for the point at which human intellect is outpaced by algorithms — and he's got some wild plans for when that day may come. In interviews with The Atlantic's Karen Hao, who is writing a book about the unsuccessful November 2023 ouster of CEO Sam Altman, people close to Sutskever said that he seemed mighty preoccupied with AGI. According to a researcher who heard the since-resigned company cofounder wax prolific about it during a summer 2023 meeting, an apocalyptic scenario seemed to be a foregone conclusion to Sutskever.

"Once we all get into the bunker..." the chief scientist began.

"I'm sorry," the researcher interrupted, "the bunker?"

"We're definitely going to build a bunker before we release AGI," Sutskever said, matter-of-factly. "Of course, it's going to be optional whether you want to get into the bunker."

The exchange highlights just how confident OpenAI's leadership was, and remains, in the technology that it believes it's building — even though others argue that we are nowhere near AGI and may never get there. As theatrical as that exchange sounds, two other people present confirmed that OpenAI's resident AGI soothsayer — who, notably, claimed months before ChatGPT's 2022 release that he believes some AI models are "slightly conscious" — did indeed mention a bunker.

"There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the first researcher told Hao. "Literally, a rapture."

As others who spoke to the author for her forthcoming book "Empire of AI" noted, Sutskever's AGI obsession had taken on a novel tenor by summer 2023. Aside from his interest in building AGI, he had also become concerned about the way OpenAI was handling the technology it was gestating. That concern ultimately led the mad scientist, alongside several other members of the company's board, to oust CEO Sam Altman a few months later, and eventually to his own departure.

Though Sutskever led the coup, his resolve, according to sources that The Atlantic spoke to, began to crack once he realized OpenAI's rank-and-file were falling in line behind Altman. He eventually rescinded his opinion that the CEO was not fit to lead in what seems to have been an effort to save his skin — an effort that, in the end, turned out to be fruitless. Interestingly, Hao also learned that people inside OpenAI had a nickname for the failed coup d'etat: "The Blip."
Yahoo
20-05-2025
- Science
- Yahoo
OpenAI ex-chief scientist planned for a doomsday bunker for the day when machines become smarter than man
Months before he left OpenAI, Sutskever believed his AI researchers needed to be assured protection once they ultimately achieved their goal of creating artificial general intelligence, or AGI. 'Of course, it's going to be optional whether you want to get into the bunker,' he told his team.

If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human.

Artificial general intelligence, or simply AGI, is the official term for that goal. It remains the holy grail for researchers to this day—a chance for mankind at last to give birth to its own sentient lifeform, even if it's only silicon and not carbon based. But while the rest of humanity debates the pros and cons of a technology even experts struggle to truly grasp, Sutskever was already planning for the day his team would finish first in the race to develop AGI. According to excerpts published by The Atlantic from a new book called Empire of AI, part of those plans included a doomsday shelter for OpenAI researchers.

'We're definitely going to build a bunker before we release AGI,' Sutskever told his team in 2023, months before he would ultimately leave the company. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. 'Of course, it's going to be optional whether you want to get into the bunker,' he assured fellow OpenAI scientists, according to people present at the time.

Written by former Wall Street Journal correspondent Karen Hao and based on dozens of interviews with some 90 current and former company employees either directly involved or with knowledge of events, Empire of AI reveals new information about the brief but spectacular coup that ousted Sam Altman as CEO in November 2023 and what it meant for the company behind ChatGPT. The book pins much of the responsibility on Sutskever and chief technology officer Mira Murati, whose concerns over Altman's alleged fixation on paying the bills at the cost of transparency were at the heart of the non-profit board sacking him. The duo apparently acted too slowly, failing to consolidate their power and win key executives over to their cause. Within a week Altman was back in the saddle, and soon almost the entire board of the non-profit would be replaced. Sutskever and Murati both left within the span of less than a year.

Neither OpenAI nor Sutskever responded to a request for comment from Fortune outside normal hours. Ms. Murati could not be reached.

Sutskever knows better than most what the awesome capabilities of AI are. Together with renowned researcher and mentor Geoff Hinton, he was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts the Big Bang of AI. Recruited personally by Elon Musk to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control and that played into rival Sam Altman's hands, the excerpts state. Much of this has been reported on by Fortune as well.

Ultimately it led to the fateful confrontation with Altman and Sutskever's subsequent departure, along with many like-minded OpenAI safety experts who worried the company no longer sufficiently cared about aligning AI with the interests of mankind at large. 'There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,' said one researcher Hao quotes, who was present when Sutskever revealed his plans for a bunker. 'Literally a rapture.'