Latest news with #OpenAI


NDTV
20-05-2025
- Science
OpenAI Co-Founder's Doomsday Bunker Plan for AGI Apocalypse Revealed
OpenAI co-founder Ilya Sutskever once proposed building a doomsday bunker that could protect the company's top researchers in case of an end-of-the-world "rapture" triggered by the release of a new form of artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI), capable of surpassing the cognitive abilities of humans. The revelation was made by Karen Hao in her upcoming book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.

Mr Sutskever, long regarded as the brains behind ChatGPT, made the comment during a meeting with key scientists at the company in 2023. "Once we all get into the bunker..." said Mr Sutskever, only to be interrupted by a confused colleague: "I'm sorry, the bunker?" To which he replied: "We're definitely going to build a bunker before we release AGI."

As per the book's excerpts, published in The Atlantic, it wasn't the only time that Mr Sutskever broached the topic. Two other sources told Ms Hao that Mr Sutskever regularly referenced the bunker in internal discussions.

Society not ready for AGI

This is not the first time a top executive working on AI models has sounded the alarm about the future of AGI. Google DeepMind CEO Demis Hassabis has already warned that society is not ready for AGI. "I think we are on the cusp of that. Maybe we are five to 10 years out. Some people say shorter, I wouldn't be surprised," said Mr Hassabis when quizzed about the timeline of AGI becoming a reality.

"It's a sort of like probability distribution. But it's coming, either way it's coming very soon and I'm not sure society's quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well," he added. The 2024 Nobel Prize in Chemistry winner has previously called for the establishment of a UN-like umbrella organisation to oversee AGI's development.

What is AGI?

AGI takes AI a step further. While AI is task-specific, AGI aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. In essence, AGI would be a machine with the ability to understand, learn, and apply knowledge in diverse domains, much like a human being.

Business Insider
20-05-2025
- Business
OpenAI's growing pains
This is an excerpt from "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao. The book is based on interviews with around 260 people and an extensive trove of correspondence and documents. Any quoted emails, documents, or Slack messages come from copies or screenshots of those documents and correspondences or are exactly as they appear in lawsuits. The author reached out to all of the key figures and companies described in this book to seek interviews and comment. OpenAI and Sam Altman chose not to cooperate.

In November 2022, rumors began to spread within OpenAI that its rival Anthropic was testing — and would soon release — a new chatbot. If it didn't launch first, OpenAI risked losing its leading position, which could deliver a big hit to morale for employees who had worked long and tough hours to retain that dominance. Anthropic had not in fact been planning any imminent releases. But for OpenAI executives, the rumors were enough to trigger a decision: The company wouldn't wait to ready GPT-4 as a chatbot; it would release John Schulman's chat-enabled GPT-3.5 model with the Superassistant team's brand-new chat interface in two weeks, right after Thanksgiving.

No one truly fathomed the societal phase shift they were about to unleash. They expected the chatbot to be a flash in the pan. The night before the release, they placed bets on how many users might try the tool by the end of the weekend. Some people guessed a few thousand. Others guessed tens of thousands. To be safe, the infrastructure team provisioned enough server capacity for 100,000 users.

On Wednesday, November 30, most employees didn't even realize that the launch had happened. But the following day, the number of users began to surge. The instant runaway success of ChatGPT was beyond what anyone at OpenAI had dreamed of. It would leave the company's engineers and researchers completely mystified even years later. GPT-3.5 hadn't been that much of a capability improvement over GPT-3, which had already been out for two years. And GPT-3.5 had already been available to developers. OpenAI CEO Sam Altman later said that he'd believed ChatGPT would be popular, but by something like "one order of magnitude less." "It was shocking that people liked it," a former employee remembers. "To all of us, they'd downgraded the thing we'd been using internally and launched it."

Within five days, OpenAI cofounder Greg Brockman tweeted that ChatGPT had crossed one million users. Within two months, it had reached 100 million, becoming what was then the fastest-growing consumer app in history. ChatGPT catapulted OpenAI from a hot startup well known within the tech industry into a household name overnight. At the same time, it was this very blockbuster success that would place extraordinary strain on the company. Over the course of a year, it would polarize the company's factions further and wind up the stress and tension within the organization to an explosive level.

By then, the company had just 300 employees. With every team stretched dangerously thin, managers begged Altman for more head count. There was no shortage of candidates. After ChatGPT, the number of applicants clamoring to join the rocket ship had rapidly multiplied. But Altman worried about what would happen to company culture and mission alignment if the company scaled up its staff too quickly. He believed firmly in maintaining a small staff and high talent density.
"We are now in a position where it's tempting to let the organization grow extremely large," he had written in his 2020 vision memo, in reference to Microsoft's investment. "We should try very hard to resist this — what has worked for us so far is being small, focused, high-trust, low-bullshit, and intense. The overhead of too many people and too much bureaucracy can easily kill great ideas or result in sclerosis." OpenAI is one of the best places I've ever worked but also probably one of the worst. He was now repeating this to executives in late 2022, emphasizing during head count discussions the need to keep the company lean and the talent bar high, and add no more than 100 or so hires. Other executives balked. At the rate that their teams were burning out, many saw the need for something closer to around 500 or even more new people. Over several weeks, the executive team finally compromised on a number somewhere in the middle, between 250 and 300. The cap didn't hold. By summer, there were as many as 30, even 50, people joining OpenAI each week, including more recruiters to scale up hiring even faster. By fall, the company had blown well past its own self-imposed quota. The sudden growth spurt indeed changed company culture. A recruiter wrote a manifesto about how the pressure to hire so quickly was forcing his team to lower the quality bar for talent. "If you want to build Meta, you're doing a great job," he said in a pointed jab at Altman, alluding to the very fears that the CEO had warned about. The rapid expansion was also leading to an uptick in firings. During his onboarding, one manager was told to swiftly document and report any underperforming members of his team, only to be let go himself sometime later. Terminations were rarely communicated to the rest of the company. People routinely discovered that colleagues had been fired only by noticing when a Slack account grayed out from being deactivated. They began calling it "getting disappeared." To new hires, fully bought into the idea that they were joining a fast-moving, money-making startup, the tumult felt like a particularly chaotic, at times brutal, manifestation of standard corporate problems: poor management, confusing priorities, the coldhearted ruthlessness of a capitalistic company willing to treat its employees as disposable. "There was a huge lack of psychological safety," says a former employee who joined during this era. Many people coming aboard were simply holding on for dear life until their one-year mark to get access to the first share of their equity. One significant upside: They still felt their colleagues were among the highest caliber in the tech industry, which, combined with the seemingly boundless resources and unparalleled global impact, could spark a feeling of magic difficult to find in the rest of the industry when things actually aligned. "OpenAI is one of the best places I've ever worked but also probably one of the worst," the former employee says. Sometimes there isn't a plan as much as there is just chaos. For some employees who remembered the scrappy early days of OpenAI as a tight-knit, mission-driven nonprofit, its dramatic transformation into a big, faceless corporation was far more shocking and emotional. Gone was the organization as they'd known it; in its place was something unrecognizable. "OpenAI is Burning Man," Rob Mallery, a former recruiter, says, referring to how the desert art festival scaled to the point that it lost touch with its original spirit. 
"I know it meant a lot more to the people who were there at the beginning than it does to everyone now." In those early years, the team had set up a Slack channel called #explainlikeimfive that allowed employees to submit anonymous questions about technical topics. With the company pushing 600 people, the channel also turned into a place for airing anonymous grievances. In mid-2023, an employee posted that the company was hiring too many people not aligned with the mission or passionate about building AGI. Another person responded: They knew OpenAI was going downhill once it started hiring people who could look you in the eye. As OpenAI was rapidly professionalizing and gaining more exposure and scrutiny, incoherence at the top was becoming more consequential. The company was no longer just the Applied and Research divisions. Now there were several public-facing departments: In addition to the communications team, a legal team was writing legal opinions and dealing with a growing number of lawsuits. The policy team was stretching out across continents. Increasingly, OpenAI needed to communicate with one narrative and voice to its constituents, and it needed to determine its positions to articulate them. But on numerous occasions, the lack of strategic clarity was leading to confused public messaging. At the end of 2023, The New York Times would sue OpenAI and Microsoft for copyright infringement for training on millions of its articles. OpenAI's response in early January, written by the legal team, delivered an unusually feisty hit back, accusing the Times of "intentionally manipulating our models" to generate evidence for its argument. That same week, OpenAI's policy team delivered a submission to the UK House of Lords communications and digital select committee, saying that it would be "impossible" for OpenAI to train its cutting-edge models without copyrighted materials. After the media zeroed in on the word impossible, OpenAI hastily walked away from the language. "There's just so much confusion all the time," says an employee in a public-facing department. While some of that reflects the typical growing pains of startups, OpenAI's profile and reach have well outpaced the relatively early stage of the company, the employee adds. "I don't know if there is a strategic priority in the C suite. I honestly think people just make their own decisions. And then suddenly it starts to look like a strategic decision but it's actually just an accident. Sometimes there isn't a plan as much as there is just chaos." Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She is the author of "Empire of AI." Adapted from " EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by Karen Hao.


New York Post
19-05-2025
- Business
OpenAI co-founder wanted to build doomsday bunker to protect company scientists from ‘rapture': book
The co-founder of ChatGPT maker OpenAI proposed building a doomsday bunker that would house the company's top researchers in case of a 'rapture' triggered by the release of a new form of artificial intelligence that could surpass the cognitive abilities of humans, according to a new book.

Ilya Sutskever, the man credited with being the brains behind ChatGPT, convened a meeting with key scientists at OpenAI in the summer of 2023 during which he said: 'Once we all get into the bunker…'

A confused researcher interrupted him. 'I'm sorry,' the researcher asked, 'the bunker?'

'We're definitely going to build a bunker before we release AGI,' Sutskever replied, according to an attendee. The plan, he explained, would be to protect OpenAI's core scientists from what he anticipated could be geopolitical chaos or violent competition between world powers once AGI — an artificial intelligence that exceeds human capabilities — is released.

'Of course,' he added, 'it's going to be optional whether you want to get into the bunker.'

The exchange was first reported by Karen Hao, author of the upcoming book 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.' An essay adapted from the book was published by The Atlantic.

The bunker comment by Sutskever wasn't a one-off. Two other sources told Hao that Sutskever had regularly referenced the bunker in internal discussions.

One OpenAI researcher went so far as to say that 'there is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture. Literally, a rapture.'

Though Sutskever declined to comment on the matter, the idea of a secure refuge for scientists developing AGI underscores the extraordinary anxieties gripping some of the minds behind the most powerful technology in the world.

Sutskever has long been seen as a kind of mystic within OpenAI, known for discussing AI in moral and even metaphysical terms, according to the author. At the same time, he's also one of the most technically gifted minds behind ChatGPT and other large language models that have propelled the company into global prominence.

In recent years, Sutskever had begun splitting his time between accelerating AI capabilities and promoting AI safety, according to colleagues.

The idea of AGI triggering civilizational upheaval isn't isolated to Sutskever. In May 2023, OpenAI CEO Sam Altman co-signed a public letter warning that AI technologies could pose an 'extinction risk' to humanity. But while the letter sought to shape regulatory discussions, the bunker talk suggests deeper, more personal fears among OpenAI's leadership.

The tension between those fears and OpenAI's aggressive commercial ambitions came to a head later in 2023 when Sutskever, along with then-Chief Technology Officer Mira Murati, helped orchestrate a brief boardroom coup that ousted Altman from the company.
Central to their concerns was the belief that Altman was sidestepping internal safety protocols and consolidating too much control over the company's future, sources told Hao. Sutskever, once a firm believer in OpenAI's original mission to develop AGI for the benefit of humanity, had reportedly grown increasingly disillusioned. He and Murati both told board members they no longer trusted Altman to responsibly guide the organization to its ultimate goal.

'I don't think Sam is the guy who should have the finger on the button for AGI,' Sutskever said, according to notes reviewed by Hao.

The board's decision to remove Altman was short-lived. Within days, mounting pressure from investors, employees, and Microsoft led to his reinstatement. Both Sutskever and Murati ultimately left the company.

The proposed bunker — while never formally announced or planned — has come to symbolize the extremity of belief among AI insiders. It captures the magnitude of what OpenAI's own leaders fear their technology could unleash, and the lengths to which some were prepared to go in anticipation of what they saw as a transformative, possibly cataclysmic, new era.

The Post has sought comment from OpenAI and Sutskever.