Latest news with #EmpireofAI

Science Fiction 'Hugely Shapes OpenAI's Imagination And Where They're Going,' Author Karen Hao Says

Yahoo

2 days ago

  • Politics
  • Yahoo

OpenAI co-founder and CEO Sam Altman is 'deeply obsessed' with the 2013 film Her, according to technology journalist Karen Hao. 'Science fiction hugely shapes OpenAI's imagination and where they're going,' Hao said during a discussion Tuesday of her bestselling new book, Empire of AI, hosted by the Pulitzer Center in New York.

Altman 'has evoked throughout OpenAI's history his idea that Her is the thing that OpenAI should be building,' the author said of the film directed by Spike Jonze and starring Joaquin Phoenix and the voice of Scarlett Johansson. 'Artificial general intelligence doesn't have a definition, and so they actually use pop culture as the way to describe and put a shape to the nebulous thing that they're trying to achieve.'

An 'under-talked-about' current in the world of AI, Hao said, is the 'deep, intertwined relationship between science fiction and pop culture portrayals of these things and, ultimately, the technologies that we get. Because a lot of these people are sci-fi nerds and they want these things, and then it shapes their beliefs, their ideas of what they want to do.'

Hao was interviewed by Marina Walker Guevara, the executive editor at the Pulitzer Center who previously oversaw the Panama Papers and Paradise Papers investigative journalism projects. Empire of AI, which was published two weeks ago, is one of two new books to profile Altman, along with Keach Hagey's The Optimist. Hao, by her own description, is delivering a 'critique' of the arms race that AI has become.

Much of her talk focused on her reporting around the world documenting the harmful effects of AI, including communities whose water supply has been compromised by data center construction. Low-paid workers in the global south, she writes, must sift through reams of objectionable content in order to train large language models. Her book also traces the development of OpenAI, which began as a well-intentioned non-profit co-founded by Elon Musk before turning into a commercial entity worth billions of dollars and funded by Microsoft.

A central question during the discussion was whether there are ways to push back against the immense wealth and dominance of Silicon Valley. 'Every community that I spoke to, whether they were artists having their intellectual property taken or water activists who were having their fresh water taken, they were all saying the same exact thing. When they encounter the empire, they feel this incredible loss of agency, a profound loss of their people's agency to self-determine,' Hao said.

If that loss is permitted, she argued, 'democracy cannot survive, because democracy is based on the fact that people feel that agency and they're willing to go to the booth to vote, because they know that it will matter. And so the theme that I find hopeful is that there are so many movements that I encountered around the world that are now trying to reclaim that agency.' She cited a protest in Chile, where activists managed to hold tech companies to account for the way their AI projects were harming the water supply.

'If we allow this to happen 100,000-fold, if we really amplify and support this work, that is how we can get this trajectory of AI development to turn from a more imperial, top-down approach ('we just say whatever we want and it goes') to a more broad-based, democratically beneficial version of AI,' Hao said.

OpenAI co-founder wanted a 'doomsday bunker' for the ChatGPT team, and why CEO Sam Altman is the reason behind it

Time of India

27-05-2025

  • Business
  • Time of India

Former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often described as a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil.

"We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The startling disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover.

Sutskever, who co-created the groundbreaking AlexNet in 2012 alongside AI pioneer Geoff Hinton, believed his fellow researchers would require protection once AGI was achieved. He reasoned that such powerful technology would inevitably become "an object of intense desire for governments globally."

What made the OpenAI co-founder want a 'doomsday bunker'

Sutskever and others worried that CEO Altman's focus on commercial success was compromising the company's commitment to developing AI safely. These tensions were exacerbated by ChatGPT's unexpected success, which unleashed a "funding gold rush" that the safety-minded Sutskever could no longer control.

"There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao. "Literally a rapture."

This apocalyptic mindset partially motivated Sutskever's participation in the board revolt against Altman. However, the coup collapsed within a week, leading to Altman's return and the eventual departure of Sutskever and other safety-focused researchers. The failed takeover, now called "The Blip" by insiders, left Altman more powerful than before while driving out many of the OpenAI safety experts who were aligned with Sutskever's cautious approach.

Since leaving OpenAI, Sutskever has founded Safe Superintelligence Inc., though he has declined to comment on his previous bunker proposals. His departure represents a broader exodus of safety-focused researchers who felt the company had abandoned its original mission of developing AI that benefits humanity broadly, rather than pursuing rapid commercialization.

The timing of AGI remains hotly debated across the industry. While Altman recently claimed AGI is possible with current hardware, Microsoft AI CEO Mustafa Suleyman disagrees, predicting it could take up to 10 years to achieve. Google co-founder Sergey Brin and DeepMind CEO Demis Hassabis see AGI arriving around 2030. However, AI pioneer Geoffrey Hinton warns there's no consensus on what AGI actually means, calling it "a serious, though ill-defined, concept." Despite disagreements over definitions and timelines, most industry leaders now view AGI as an inevitability rather than a possibility.

Hey ChatGPT, which one of these versions truly is the real Sam Altman?

Business Standard

25-05-2025

  • Business
  • Business Standard

If the aim is not, in the first place, to help the world, but instead to get bigger - better chips, more data, smarter code - then our problems might just get bigger too

By Tim Wu (NYT)

EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. Penguin Press; 482 pages; $32
THE OPTIMIST: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. 367 pages; $31.99

The 'paper clip problem' is a well-known ethics thought experiment. It imagines a superintelligent AI charged with the seemingly harmless goal of making as many paper clips as possible. Trouble is, as the philosopher Nick Bostrom put it in 2003, without common-sense limits it might transform 'first all of earth and then increasing portions of space into paper clip manufacturing facilities.' The tale has long served as a warning about objectives pursued too literally.

Two new books that orbit the entrepreneur Sam Altman and the firm he co-founded, OpenAI, suggest we may already be living with a version of the problem. In Empire of AI, journalist Karen Hao argues that the pursuit of an artificial superintelligence has become its own figurative paper clip factory, devouring too much energy, minerals and human labour. The Optimist, by the Wall Street Journal reporter Keach Hagey, leaves readers suspecting that the earnest and seemingly innocuous paper clip maker who ends up running the world for his own ends could be Altman himself.

Hao portrays OpenAI and other companies that make up the fast-growing AI sector as a 'modern-day colonial world order.' Much like the European powers of the 18th and 19th centuries, they 'seize and extract precious resources to feed their vision of artificial intelligence.' In a corrective to tech journalism that rarely leaves Silicon Valley, Hao ranges well beyond the Bay Area with extensive fieldwork in Kenya, Colombia and Chile.

The Optimist is concentrated on Altman's life and times. Born in Chicago to progressive parents named Connie and Jerry, Altman was heavily influenced by their do-gooder spirit. His relentlessly upbeat manner and genuine technical skill made him a perfect fit for Silicon Valley. The arc of Altman's life also follows a classic script. He drops out of Stanford to launch a start-up that fizzles, but the effort brings him to the attention of Paul Graham, the co-founder of Y Combinator, an influential tech incubator that launched companies like Airbnb and Dropbox. By age 28, Altman has risen to succeed Graham as the organisation's president, setting the stage for his leadership in the AI revolution. As Hagey makes clear, success in this context is all about the way you use the people you know.

During the 2010s Altman joined a group of Silicon Valley investors determined to recover the grand ambitions of earlier tech eras. They sought to return to outer space, unlock nuclear fusion, achieve human-level AI and even defeat death itself. The investor Peter Thiel was a major influence, but Altman's most important collaborator was Elon Musk. The early-2010s Musk who appears in both books is almost unrecognisable to observers who now associate him with black MAGA hats and chain-saw antics. This Musk, the builder of Tesla and SpaceX, believes that creating superintelligent computer systems is 'summoning the demon.' He becomes obsessed with the idea that Google will soon develop a true artificial intelligence and allow it to become a force for evil. Altman mirrors his anxieties and persuades him to bankroll a more idealistic rival.

He pitched a 'Manhattan Project for AI,' a nonprofit to develop a good AI in order to save humanity from its evil twin. Musk guaranteed $1 billion and even supplied the name OpenAI. Hagey's book, written with Altman's cooperation, is no hagiography. The Optimist lets the reader see how thoroughly Altman outfoxed his patron. It's striking that, despite providing much of the initial capital and credibility, Musk ends up with almost nothing to show for his investment.

Hao's 2020 profile of OpenAI, published in the MIT Technology Review, was unflattering and the company declined to cooperate with her for her book. She wants to make its negative spillover effects evident. Hao does an admirable job of telling the stories of workers in Nairobi who earn 'starvation wages to filter out violence and hate speech' from ChatGPT, and of visits to communities in Chile where data centres siphon prodigious amounts of water and electricity to run complex hardware.

Altman recently told the statistician Nate Silver that if we achieve human-level AI, 'poverty really does just end.' But motives matter. The efficiencies of the cotton gin saved on labour but made slavery even more lucrative. If the aim is not, in the first place, to help the world, but instead to get bigger - better chips, more data, smarter code - then our problems might just get bigger too.

Note: The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to AI systems. OpenAI and Microsoft have denied those claims.

The reviewer is a law professor at Columbia University.
©2025 The New York Times News Service
