
Latest news with #RacetoInventtheFuture

Saint, Satan, Sam: Chat about the ChatGPT Man

Indian Express

03-08-2025

  • Indian Express

For many people, AI (artificial intelligence) is almost synonymous with ChatGPT, the chatbot developed by OpenAI and the closest thing tech has had to a magic genie. You just tell ChatGPT what you want and it serves it up – from writing elaborate essays to advising you on how to clear up your table to generating images based on your descriptions. Such is its popularity that at one stage it even overtook the likes of Instagram and TikTok to become the most downloaded app in the world. While almost every major tech brand has its own AI tool (even the mighty Apple is working on one), for many people AI still means ChatGPT.

The man behind this phenomenon is Samuel Harris 'Sam' Altman, the 40-year-old CEO of OpenAI, and perhaps the most polarising figure in tech since Steve Jobs. To many, he is a visionary who is changing the world and taking humanity to a better place. To many others, he is a cunning, manipulative person who uses his marketing skills to raise money and is actually destroying the planet. The truth might be somewhere between those two extremes.

By some literary coincidence, two books on Sam Altman have recently been released, and both are shooting up the bestseller charts. Both are superbly written and researched (based on interviews with hundreds of people), and while they start at almost the same point, they unsurprisingly come to rather different conclusions about the man and his work.

Those who tend to see Altman as a well-meaning, if occasionally odd, genius will love Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Hagey is a Wall Street Journal reporter, and while she does not put a halo around Altman, her take on the OpenAI CEO reflects the title of the book – she sees Altman as a visionary who is trying to change the world. The fact that Altman collaborated on the book (although he is believed to have thought he was too young for a biography) might have something to do with this, for the book does articulate Altman's vision on a variety of subjects, but most of all on AI and where it is headed.

Although it begins with the events leading up to Altman's dramatic sacking as the CEO of OpenAI in November 2023, and his equally dramatic reinstatement within days, Hagey's book is a classic biography. It walks us through Altman's childhood, his early interest in coding and his decision to drop out of Stanford, before he gets into tech-CEO mode, first founding the social media app Loopt and then joining the tech incubator Y Combinator (which was behind the likes of Stripe, Airbnb and Dropbox) after meeting its co-founder Paul Graham, who is believed to have had a profound impact on him (Hagey calls him 'his mentor'). Altman also gets in touch with a young billionaire who is very interested in AI and is worried that Google will come out with an AI tool that could ruin the world. Elon Musk in this book is very different from the eccentric character we have seen in the Trump administration, and is persuaded by Altman to invest in a 'Manhattan Project for AI,' which would be open source and ensure that AI is only used for human good. Musk even proposes a name for it: OpenAI. And that is when things get really interesting.

The similarities with Jobs are uncanny.
Altman too is deeply influenced by his parents (his father was known for his kind and generous nature), and like Jobs, although he is a geek, Altman's rise in Silicon Valley owes more to his ability to network and communicate than to his tech knowledge. In perhaps the most succinct summary of Altman one can find, Hagey writes: 'Altman was not actually writing the code. He was, instead, the visionary, the evangelizer, and the dealmaker; in the nineteenth century, he would have been called 'the promoter.' His speciality, honed over years of advising and then running…Y Combinator, was to take the nearly impossible, convince others that it was in fact possible, and then raise so much money that it actually became possible.'

But his ability to sell himself as a visionary and raise funds for causes has also led to Altman being seen as a person who moulds himself to the needs of his audience. This in turn has led to him being seen as someone who indulges in doublespeak and exploits people for his own advantage (an accusation that was levelled at Jobs as well) – Musk ends up suing Altman and OpenAI for allegedly abandoning the non-profit character it was set up with. While Hagey never accuses Altman of being selfish, it is clear that the board at OpenAI lost patience with what OpenAI co-founder Ilya Sutskever refers to as 'duplicity and calamitous aversion to conflict.' It eventually leads to his being sacked by the OpenAI board for not being 'consistently candid in his communications with the board.' Of course, his sacking triggered off a near mutiny in OpenAI, with employees threatening to leave, which in turn led to his being reinstated within a few days, and all being seemingly forgotten, if not forgiven.

Hagey's book is a rich portrait of Altman, his obsession with human progress (he has three hand axes used by hominids in his house), his relationships with those he came in touch with, and Silicon Valley politics in general. At about 380 pages, The Optimist is easily the single best book on Altman you can read, and Hagey's brisk narration makes it a compelling read.

A much more cynical perception of Altman and OpenAI comes in Karen Hao's much talked-about Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Currently a freelancer who writes for The Atlantic, Hao previously worked at the Wall Street Journal and covered OpenAI as far back as 2020, before ChatGPT had made it a household name. As its name indicates, Hao's book is as much about Altman as it is about OpenAI, and the place both occupy in the artificial intelligence revolution that is currently enveloping the world. At close to 500 pages, it is a bigger book than Hagey's, but it reads almost like a thriller, and begins with a bang: 'On Friday, November 17, 2023, around noon Pacific time, Sam Altman, CEO of OpenAI, Silicon Valley's golden boy, avatar of the generative AI revolution, logged on to a Google Meet to see four of his five board members staring at him. From his video square, board member Ilya Sutskever, OpenAI's chief scientist, was brief: Altman was being fired.'

While Hagey focuses more on Altman as a person, Hao looks at him as part of OpenAI, and the picture that emerges is not a pretty one. The first chapter begins with his meeting Elon Musk ('Everyone else had arrived, but Elon Musk was late as usual') in 2015 and discussing the future of AI and humanity with a group of leading engineers and researchers. This meeting would lead to the formation of OpenAI, a name given by Musk.
But all of them ended up leaving the organisation because they did not agree with Altman's perception and vision of AI. Hao uses the incident to show how Altman switched sides on AI, going from being someone who was concerned about AI falling into the wrong hands to someone who pushed it as a tool for all. Like Hagey, Hao also highlights Altman's skills as a negotiator and dealmaker. However, her take is much darker. Hagey's Altman is a visionary who prioritises human good, and makes the seemingly impossible possible through sheer vision and effort. Hao's Altman is a power-hungry executive who uses and exploits people, and is almost an AI colonialist. 'Sam is extremely good at becoming powerful,' says Paul Graham, the man who was Altman's mentor. 'You could parachute him into an island full of cannibals and come back in 5 years and he would be the king.'

Hao's book is far more disturbing than Hagey's because it turns the highly rose-tinted view many have not just of Altman and OpenAI, but of AI in general, on its head. We get to see a very competitive industry with far too much stress and poor working conditions (OpenAI hires workers in Africa at very low wages), and almost no regard for the environment (AI uses large amounts of water and electricity). OpenAI in Hao's book emerges almost as a sort of modern East India Company, looking to expand influence, territory and profits by mercilessly exploiting both customers and employees. Some might call it too dark, but her research and interviews across different countries cannot be faulted.

It would be excessively naive to take either book as the absolute truth on Altman in particular and OpenAI and AI in general, but they are both must-reads for anyone who wants a complete picture of the AI revolution and its biggest brand and face. Mind you, it is a picture that is still in the process of being painted. AI is still in its infancy, and Altman turned forty in April. But as these two excellent books prove, neither is too young to be written about, and both are definitely relevant enough to be read about.

What 6 Citadel Securities leaders are reading (or listening to) this summer

Yahoo

28-07-2025

  • Business
  • Yahoo

A recent newsletter from Citadel Securities included six executives' beach recommendations. Their suggestions get philosophical, ranging from historical deep-dives to the journey of the cell. Here's their chosen media, which includes books, a podcast, and even a YouTube series.

For some Citadel Securities leaders, fun in the sun means getting philosophical. A recent newsletter from Ken Griffin's trading firm, which is behind nearly a quarter of all US stock trades, included six executives' beach reads and listens. Their recommendations appeal to a range of potential beachgoers, including everything from diet advice to a 1927 classic to a YouTube series that explains egg freezing. Here are six Citadel Securities executives' summer media recommendations.

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future

Matt Culek, the chief operating officer, recommended the book by Wall Street Journal reporter Keach Hagey to anyone who thinks that AI will transform business and daily life — so, basically, everyone. He called it a "compelling account of OpenAI's founding, Altman's leadership, and the fierce competition among leading AI firms." The book, published in May, tracks OpenAI CEO Sam Altman's journey from his childhood in St. Louis, his time at startups, his temporary ouster at OpenAI, and his current leadership. It's based on more than 200 interviews and has 3.97 stars on Goodreads.

The Cell: A Visual Tour of the Building Block of Life

Chief Technology Officer Josh Woods said that the 2015 book is "as informative as it is visually stunning." Written by the writer and lecturer Jack Challoner, "The Cell" chronicles scientific breakthroughs around life's basic unit, tracking the evolutionary journey from single- to multi-celled organisms. On Goodreads, the book has 4.37 stars.

Decisive Moments in History: Twelve Historical Miniatures

Shyam Rajan, the global head of fixed income, suggested Austrian writer Stefan Zweig's 1927 classic, which was originally published in German. According to Rajan, the book "thoughtfully captures the catalysts that changed the trajectory of history ranging from the fall of Constantinople to the discovery of the Pacific Ocean." Other vignettes include an affair between a 74-year-old and a 19-year-old, and the story of a man who legally owned a good portion of California. The book has a 4.24 rating on Goodreads.

How Not to Die: Discover the Foods Scientifically Proven to Prevent and Reverse Disease

It seems like the summer of health for Alex DiLeonardo, the company's chief people officer, who recommended the "eye opening" book about how to prevent chronic illness through nutrition. "Our colleagues take the same optimizing lens to their life that we take to the market," DiLeonardo wrote in his suggestion. Published in 2015 by American physician Michael Greger, "How Not to Die" examines the top 15 causes of premature death. It has 4.42 stars on Goodreads and includes a checklist of the 12 foods Greger thinks we should eat daily.

Wind of Change: Did the CIA write a power ballad that ended the Cold War?

Dane Skillrud, COO of systematic equities & FICC, recommended a podcast instead of a book. The eight-part miniseries from 2020 is hosted by New Yorker staff writer Patrick Radden Keefe and follows his investigation into whether the CIA wrote the song "Wind of Change" by Scorpions, a German rock band. According to rumors, the CIA wrote the 1990s hit to help bring about the fall of the USSR.
"It's a useful reminder of the importance and power of new ideas, music, and language," Skillrud wrote in his recommendation. The series has 4.8 stars on Spotify. Huge if True: An optimistic show about using science and technology to make the future better For the COO of technology and low latency, Jeff Maurone, summer media means YouTube. He recommended video journalist Cleo Abram's series on the future of technology, saying that "she is a tremendous storyteller who helps me navigate how technology and AI are changing our world." Recent episodes focus on everything from DNA editing to getting sucked into a black hole to egg freezing to interviews with Nvidia CEO Jensen Huang and Meta CEO Mark Zuckerberg. Read the original article on Business Insider

Hey ChatGPT, which one of these versions truly is the real Sam Altman?

Business Standard

25-05-2025

  • Business
  • Business Standard

If the aim is not, in the first place, to help the world, but instead to get bigger - better chips, more data, smarter code - then our problems might just get bigger too.

By Tim Wu (NYT)

EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. Published by Penguin Press. 482 pages. $32

THE OPTIMIST: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. Published by W. W. Norton. 367 pages. $31.99

The 'paper clip problem' is a well-known ethics thought experiment. It imagines a superintelligent AI charged with the seemingly harmless goal of making as many paper clips as possible. Trouble is, as the philosopher Nick Bostrom put it in 2003, without common-sense limits it might transform 'first all of earth and then increasing portions of space into paper clip manufacturing facilities.' The tale has long served as a warning about objectives pursued too literally. Two new books that orbit the entrepreneur Sam Altman and the firm he co-founded, OpenAI, suggest we may already be living with a version of the problem.

In Empire of AI, journalist Karen Hao argues that the pursuit of an artificial superintelligence has become its own figurative paper clip factory, devouring too much energy, minerals and human labour. The Optimist, by the Wall Street Journal reporter Keach Hagey, leaves readers suspecting that the earnest and seemingly innocuous paper clip maker who ends up running the world for his own ends could be Altman himself.

Hao portrays OpenAI and other companies that make up the fast-growing AI sector as a 'modern-day colonial world order.' Much like the European powers of the 18th and 19th centuries, they 'seize and extract precious resources to feed their vision of artificial intelligence.' In a corrective to tech journalism that rarely leaves Silicon Valley, Hao ranges well beyond the Bay Area, with extensive fieldwork in Kenya, Colombia and Chile.

The Optimist is concentrated on Altman's life and times. Born in Chicago to progressive parents named Connie and Jerry, Altman was heavily influenced by their do-gooder spirit. His relentlessly upbeat manner and genuine technical skill made him a perfect fit for Silicon Valley. The arc of Altman's life also follows a classic script. He drops out of Stanford to launch a start-up that fizzles, but the effort brings him to the attention of Paul Graham, the co-founder of Y Combinator, an influential tech incubator that launched companies like Airbnb and Dropbox. By age 28, Altman has risen to succeed Graham as the organisation's president, setting the stage for his leadership in the AI revolution. As Hagey makes clear, success in this context is all about the way you use the people you know.

During the 2010s Altman joined a group of Silicon Valley investors determined to recover the grand ambitions of earlier tech eras. They sought to return to outer space, unlock nuclear fusion, achieve human-level AI and even defeat death itself. The investor Peter Thiel was a major influence, but Altman's most important collaborator was Elon Musk. The early-2010s Musk who appears in both books is almost unrecognisable to observers who now associate him with black MAGA hats and chain-saw antics. This Musk, the builder of Tesla and SpaceX, believes that creating superintelligent computer systems is 'summoning the demon.' He becomes obsessed with the idea that Google will soon develop a true artificial intelligence and allow it to become a force for evil. Altman mirrors his anxieties and persuades him to bankroll a more idealistic rival.
He pitched a 'Manhattan Project for AI,' a nonprofit to develop a good AI in order to save humanity from its evil twin. Musk guaranteed $1 billion and even supplied the name OpenAI. Hagey's book, written with Altman's cooperation, is no hagiography. The Optimist lets the reader see how thoroughly Altman outfoxed his patron. It's striking that, despite providing much of the initial capital and credibility, Musk ends up with almost nothing to show for his investment.

Hao's 2020 profile of OpenAI, published in the MIT Technology Review, was unflattering, and the company declined to cooperate with her for her book. She wants to make its negative spillover effects evident. Hao does an admirable job of telling the stories of workers in Nairobi who earn 'starvation wages to filter out violence and hate speech' from ChatGPT, and of visits to communities in Chile where data centres siphon prodigious amounts of water and electricity to run complex hardware.

Altman recently told the statistician Nate Silver that if we achieve human-level AI, 'poverty really does just end.' But motives matter. The efficiencies of the cotton gin saved on labour but made slavery even more lucrative. If the aim is not, in the first place, to help the world, but instead to get bigger — better chips, more data, smarter code — then our problems might just get bigger too.

Note: The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to AI systems. OpenAI and Microsoft have denied those claims.

The reviewer is a law professor at Columbia University

©2025 The New York Times News Service

How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution

WIRED

20-05-2025

  • Business
  • WIRED

May 20, 2025 7:00 AM

The AI doomer and the AI boomer both created each other's monsters. An excerpt from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future.

In a new book about Sam Altman, journalist Keach Hagey maps the connections between the minds driving the AI boom.

It would be hard to overstate the impact that Peter Thiel has had on the career of Sam Altman. After Altman sold his first startup in 2012, Thiel bankrolled his first venture fund, Hydrazine Capital. Thiel saw Altman as an inveterate optimist who stood at 'the absolute epicenter, maybe not of Silicon Valley, but of a Silicon Valley zeitgeist.' As Thiel put it, 'If you had to look for the one person who represented a millennial tech person, it would be Altman.' Each year, Altman would point Thiel toward the most promising startup at Y Combinator (Airbnb in 2012, Stripe in 2013, Zenefits in 2014), and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle. Following Altman's advice brought Thiel's Founders Fund some immense returns.

Thiel, meanwhile, became the loudest voice critiquing the lack of true technological progress amidst all the hype. 'Forget flying cars,' he quipped during a 2012 Stanford lecture. 'We're still sitting in traffic.' By the time Altman took over Y Combinator in 2014, he had internalized Thiel's critique of 'tech stagnation' and channeled it to remake YC as an investor in 'hard tech' moonshots like nuclear energy, supersonic planes—and artificial intelligence. Now it was Altman who was increasingly taking his cues from Thiel.

And if it's hard to exaggerate Thiel's effect on Altman, it's similarly easy to understate the influence that an AI-obsessed autodidact named Eliezer Yudkowsky had on Thiel's early investments in AI. Though he has since become perhaps the world's foremost AI doomsday prophet, Yudkowsky started out as a magnetic, techno-optimistic wunderkind who excelled at rallying investors, researchers, and eccentrics around a quest to 'accelerate the singularity.'

In this excerpt from the forthcoming book The Optimist, Keach Hagey describes how Thiel's relationship with Yudkowsky set the stage for the generative AI revolution: How it was Yudkowsky who first inspired one of the founders of DeepMind to imagine and build a 'superintelligence,' and Yudkowsky who introduced the founders of DeepMind to Thiel, one of their first investors. How Thiel's conversations with Altman about DeepMind would help inspire the creation of OpenAI. And how Thiel, as one of Yudkowsky's most important backers, inadvertently seeded the AI-apocalyptic subcultures that would ultimately play a role in Sam Altman's ouster, years later, as CEO of OpenAI.

Like Sam Altman, Peter Thiel had long been obsessed with the possibility that one day computers would become smarter than humans and unleash a self-reinforcing cycle of exponential technological progress, an old science fiction trope often referred to as 'the singularity.' The term was first introduced by the mathematician and Manhattan Project adviser John von Neumann in the 1950s, and popularized by the acclaimed sci-fi author Vernor Vinge in the 1980s. Vinge's friend Marc Stiegler, who worked on cybersecurity for the likes of Darpa while drafting futuristic novels, recalled once spending an afternoon with Vinge at a restaurant outside a sci-fi convention 'swapping stories we would never write because they were both horrific and quite possible.
We were too afraid some nutjob would pick one of them up and actually do it.' Among the many other people influenced by Vinge's fiction was Eliezer Yudkowsky.

Born into an Orthodox Jewish family in 1979 in Chicago, Yudkowsky was the son of a psychiatrist mother and a physicist father who went on to work at Bell Labs and Intel on speech recognition, and was himself a devoted sci-fi fan. Yudkowsky began reading science fiction at age 7 and writing it at age 9. At 11, he scored a 1410 on the SAT. By seventh grade, he told his parents he could no longer tolerate school. He did not attend high school. By the time he was 17, he was painfully aware that he was not like other people, posting a web page declaring that he was a 'genius' but 'not a Nazi.' He rejected being defined as a 'male teenager,' instead preferring to classify himself as an 'Algernon,' a reference to the famous Daniel Keyes short story about a lab mouse who gains enhanced intelligence. Thanks to Vinge, he had discovered the meaning of life. 'The sole purpose of this page, the sole purpose of this site, the sole purpose of anything I ever do as an Algernon is to accelerate the Singularity,' he wrote.

Around this time, Yudkowsky discovered an obscure mailing list of a society calling itself the Extropians, which was the subject of a 1994 article in Wired that happened to include their email address at the end. Founded by philosopher Max More in the 1980s, Extropianism is a form of pro-science super-optimism that seeks to fight entropy—the universal law that says things fall apart, everything tends toward chaos and death—on all fronts. In practical terms, this meant signing up to have their bodies—or at least heads—frozen at negative 321 degrees Fahrenheit at the Alcor Life Extension Foundation in Scottsdale, Arizona, after they died. They would be revived once humanity was technologically advanced enough to do so. More philosophically, fighting entropy meant abiding by five principles: Boundless Expansion, Self-Transformation, Dynamic Optimism, Intelligent Technology, and Spontaneous Order. (Dynamic Optimism, for example, involved a technique called selective focus, in which you'd concentrate on only the positive aspects of a given situation.)

Robin Hanson, who joined the movement and became renowned for creating prediction markets, described attending multilevel Extropian parties at big houses in Palo Alto at the time. 'And I was energized by them, because they were talking about all these interesting ideas. And my wife was put off because they were not very well presented, and a little weird,' he said. 'We all thought of ourselves as people who were seeing where the future was going to be, and other people didn't get it. Eventually—eventually—we'd be right, but who knows exactly when.'

More's cofounder of the journal Extropy, Tom Bell, aka T. O. Morrow (Bell claims that Morrow is a distinct persona and not simply a pen name), wrote about systems of 'polycentric law' that could arise organically from voluntary transactions between agents free of government interference, and of 'Free Oceana,' a potential Extropian settlement on a man-made floating island in international waters. (Bell ended up doing pro bono work years later for the Seasteading Institute, for which Thiel provided seed funding.) If this all sounds more than a bit libertarian, that's because it was.
The WIRED article opens at one such Extropian gathering, during which an attendee shows up dressed like the 'State,' wearing a vinyl bustier, miniskirt, and chain harness top and carrying a riding crop, dragging another attendee dressed up as 'the Taxpayer' on a leash on all fours. The mailing list and broader Extropian community had only a few hundred members, but among them were a number of famous names, including Hanson; Marvin Minsky, the Turing Award–winning scientist who founded MIT's AI lab in the late 1950s; Ray Kurzweil, the computer scientist and futurist whose books would turn 'the singularity' into a household word; Nick Bostrom, the Swedish philosopher whose writing would do the same for the supposed 'existential risk' posed by AI; Julian Assange, a decade before he founded WikiLeaks; and three people—Nick Szabo, Wei Dai, and Hal Finney—rumored to either be or be adjacent to the pseudonymous creator of Bitcoin, Satoshi Nakamoto. 'It is clear from even a casual perusal of the Extropians archive (maintained by Wei Dai) that within a few months, teenage Eliezer Yudkowsky became one of this extraordinary cacophony's preeminent voices,' wrote the journalist Jon Evans in his history of the movement.

In 1996, at age 17, Yudkowsky argued that superintelligences would be a great improvement over humans, and could be here by 2020. Two members of the Extropian community, internet entrepreneurs Brian and Sabine Atkins—who met on an Extropian mailing list in 1998 and were married soon after—were so taken by this message that in 2000 they bankrolled a think tank for Yudkowsky, the Singularity Institute for Artificial Intelligence. At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message of benevolent superintelligence. 'I thought very smart things would automatically be good,' he said. Within eight months, however, he began to realize that he was wrong—way wrong. AI, he decided, could be a catastrophe.

'I was taking someone else's money, and I'm a person who feels a pretty deep sense of obligation towards those who help me,' Yudkowsky explained. 'At some point, instead of thinking, 'If superintelligences don't automatically determine what is the right thing and do that thing that means there is no real right or wrong, in which case, who cares?' I was like, 'Well, but Brian Atkins would probably prefer not to be killed by a superintelligence.' ' He thought Atkins might like to have a 'fallback plan,' but when he sat down and tried to work one out, he realized with horror that it was impossible. 'That caused me to actually engage with the underlying issues, and then I realized that I had been completely mistaken about everything.'

The Atkinses were understanding, and the institute's mission pivoted from making artificial intelligence to making friendly artificial intelligence. 'The part where we needed to solve the friendly AI problem did put an obstacle in the path of charging right out to hire AI researchers, but also we just surely didn't have the funding to do that,' Yudkowsky said. Instead, he devised a new intellectual framework he dubbed 'rationalism.' (While on its face, rationalism is the belief that humankind has the power to use reason to come to correct answers, over time it came to describe a movement that, in the words of writer Ozy Brennan, includes 'reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism.'
Scott Alexander, Yudkowsky's intellectual heir, jokes that the movement's true distinguishing trait is the belief that 'Eliezer Yudkowsky is the rightful caliph.') In a 2004 paper, 'Coherent Extrapolated Volition,' Yudkowsky argued that friendly AI should be developed based not just on what we think we want AI to do now, but what would actually be in our best interests. 'The engineering goal is to ask what humankind 'wants,' or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.,' he wrote. In the paper, he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: If your AI is programmed to produce paper clips, if you're not careful, it might end up filling the solar system with paper clips.

In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to push forward nanotechnology. (Many of its original members came from the L5 Society, which was dedicated to pressing for the creation of a space colony hovering just behind the moon, and successfully lobbied to keep the United States from signing the United Nations Moon Agreement of 1979 due to its provision against terraforming celestial bodies.) Thiel was in attendance, regaling fellow guests with stories about a friend who was a market bellwether, because every time he thought some potential investment was hot, it would tank soon after. Yudkowsky, having no idea who Thiel was, walked up to him after dinner. 'If your friend was a reliable signal about when an asset was going to go down, they would need to be doing some sort of cognition that beat the efficient market in order for them to reliably correlate with the stock going downwards,' Yudkowsky said, essentially reminding Thiel about the efficient-market hypothesis, which posits that all risk factors are already priced into markets, leaving no room to make money from anything besides insider information. Thiel was charmed.

Thiel and Yudkowsky began having occasional dinners together. Yudkowsky came to regard Thiel 'as something of a mentor figure,' he said. In 2005, Thiel started funding Yudkowsky's Singularity Institute, and the following year they teamed up with Ray Kurzweil—whose book The Singularity Is Near had become a bestseller—to create the Singularity Summit at Stanford University. Over the next six years, it expanded to become a prominent forum for futurists, transhumanists, Extropians, AI researchers, and science fiction authors, including Bostrom, More, Hanson, Stanford AI professor Sebastian Thrun, XPrize founder Peter Diamandis, and Aubrey de Grey, a gerontologist who claims humans can eventually defeat aging. Skype cofounder Jaan Tallinn, who participated in the summit, was inspired by Yudkowsky to become one of the primary funders of research dedicated to reducing existential risk from AI. Another summit participant, physicist Max Tegmark, would go on to co-found the Future of Life Institute. Vernor Vinge himself even showed up, looking like a public school chemistry teacher with his Walter White glasses and tidy gray beard, cheerfully reminding the audience that when the singularity comes, 'We're no longer in the driver's seat.'
In 2010, one of the AI researchers whom Yudkowsky invited to speak at the summit was Shane Legg, a New Zealand–born mathematician, computer scientist, and ballet dancer who had been obsessed with building superintelligence ever since Yudkowsky had introduced him to the idea a decade before. Legg had been working at Intelligenesis, a New York–based startup founded by the computer scientist Ben Goertzel that was trying to develop the world's first AI. Its best-known product was WebMind, an ambitious software project that attempted to predict stock market trends. Goertzel, who had a PhD in mathematics, had been an active poster on the Extropians mailing list for years, sparring affectionately with Yudkowsky on transhumanism and libertarianism. (He was in favor of the former but not so much the latter.) Back in 2000, Yudkowsky came to speak at Goertzel's company (which would go bankrupt within a year). Legg points to the talk as the moment when he started to take the idea of superintelligence seriously, going beyond the caricatures in the movies. Goertzel and Legg began referring to the concept as 'artificial general intelligence.'

Legg went on to get his own PhD, writing a dissertation, 'Machine Super Intelligence,' that noted the technology could become an existential threat, and then moved into a postdoctoral fellowship at University College London's Gatsby Computational Neuroscience Unit, a lab that encompassed neuroscience, machine learning, and AI. There, he met a gaming savant from London named Demis Hassabis, the son of a Singaporean mother and Greek Cypriot father. Hassabis had once been the second-ranked chess player in the world under the age of 14. Now he was focused on building an AI inspired by the human brain.

Legg and Hassabis shared a common, deeply unfashionable vision. 'It was basically eye-rolling territory,' Legg told the journalist Cade Metz. 'If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character.' Legg thought it could be built in the academy, but Hassabis, who had already tried a startup and failed, knew better. The only way to do it was through industry. And there was one investor who would be an obvious place to start: Peter Thiel.

Legg and Hassabis came to the 2010 Singularity Summit as presenters, yes, but really to meet Thiel, who often invited summit participants to his townhouse in San Francisco, according to Metz's account. Hassabis spoke on the first day of the summit, which had moved to a hotel in downtown San Francisco, outlining his vision for an AI that took inspiration from the human brain. Legg followed the next day with a talk on how AI needed to be measurable to move forward. Afterward, they went for cocktails at Thiel's Marina District home, with its views of both the Golden Gate Bridge and the Palace of Fine Arts, and were delighted to see a chessboard out on a table. They wove through the crowd and found Yudkowsky, who led them over to Thiel for an introduction. Trying to play it cool, Hassabis skipped the hard sell and began with chess, a topic he knew was dear to Thiel's heart. The game had stood the test of time, Hassabis said, because the knight and bishop had such an interesting tension—equal in value, but profoundly different in strengths and weaknesses. Thiel invited them to return the next day to tell him about their startup. In the morning, they pitched Thiel, fresh from a workout, across his dining room table.
Hassabis said they were building AGI inspired by the human brain, would initially measure its progress by training it to play games, and were confident that advances in computing power would drive their breakthroughs. Thiel balked at first, but over the course of weeks agreed to invest $2.25 million, becoming the as-yet-unnamed company's first big investor. A few months later, Hassabis, Legg, and their friend, the entrepreneur Mustafa Suleyman, officially cofounded DeepMind, a reference to the company's plans to combine 'deep learning,' a type of machine learning that uses layers of neural networks, with actual neuroscience. From the beginning, they told investors that their goal was to develop AGI, even though they feared it could one day threaten humanity's very existence.

It was through Thiel's network that DeepMind recruited his fellow PayPal veteran Elon Musk as an investor. Thiel's Founders Fund, which had invested in Musk's rocket company, SpaceX, invited Hassabis to speak at a conference in 2012, and Musk was in attendance. Hassabis laid out his 10-year plan for DeepMind, touting it as a 'Manhattan Project' for AI years before Altman would use the phrase. Thiel recalled one of his investors joking on the way out that the speech was impressive, but he felt the need to shoot Hassabis to save the human race. The next year, Luke Nosek, a cofounder of both PayPal and Founders Fund who is friends with Musk and sits on the SpaceX board, introduced Hassabis to Musk. Musk took Hassabis on a tour of SpaceX's headquarters in Los Angeles. When the two settled down for lunch in the company cafeteria, they had a cosmic conversation. Hassabis told Musk he was working on the most important thing in the world, a superintelligent AI. Musk responded that he, in fact, was working on the most important thing in the world: turning humans into an interplanetary species by colonizing Mars. Hassabis responded that that sounded great, so long as a rogue AI did not follow Musk to Mars and destroy humanity there too. Musk got very quiet. He had never really thought about that. He decided to keep tabs on DeepMind's technology by investing in it.

In December 2013, Hassabis stood on stage at a machine-learning conference at Harrah's in Lake Tahoe and demonstrated DeepMind's first big breakthrough: an AI agent that could learn to play and then quickly master the classic Atari video game Breakout without any instruction from humans. DeepMind had done this with a combination of deep neural networks and reinforcement learning, and the results were so stunning that Google bought the company for a reported $650 million a month later. The implications of DeepMind's achievement—which was a major step toward a general-purpose intelligence that could make sense of a chaotic world around it and work toward a goal—were not widely understood until the company published a paper on its findings in the journal Nature more than a year later. But Thiel, as a DeepMind investor, understood them well, and discussed them with Altman.

In February 2014, a month after Google bought DeepMind, Altman wrote a post on his personal blog titled 'AI' that declared the technology the most important tech trend that people were not paying enough attention to. 'To be clear, AI (under the common scientific definition) likely won't work. You can say that about any new technology, and it's a generally correct statement.
But I think most people are far too pessimistic about its chances,' he wrote, adding that 'artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.' A little more than a year later, Altman teamed up with Elon Musk to cofound OpenAI as a noncorporate counterweight to Google's DeepMind. And with that, the race to build artificial general intelligence was on.

This was a race that Yudkowsky had helped set off. But as it picked up speed, Yudkowsky himself was growing increasingly alarmed about what he saw as the extinction-level danger it posed. He was still influential among investors, researchers, and eccentrics, but now as a voice of extreme caution. Yudkowsky was not personally involved in OpenAI, but his blog, LessWrong, was widely read among the AI researchers and engineers who worked there. (While still at Stripe, OpenAI cofounder Greg Brockman had organized a weekly LessWrong reading group.) The rationalist ideas Yudkowsky espoused overlapped significantly with those of the Effective Altruism movement, which was turning much of its attention to preventing existential risk from AI. A few months after this race spilled into full public view with OpenAI's release of ChatGPT in November 2022, Yudkowsky published an essay in Time magazine arguing that unless the current wave of generative AI research was halted, 'literally everyone on Earth will die.'

Thiel felt that Yudkowsky had become 'extremely black-pilled and Luddite.' And two of OpenAI's board members had ties to Effective Altruism. Less than a week before Altman was briefly ousted as CEO in the fall of 2023, Thiel warned his friend, 'You don't understand how Eliezer has programmed half the people in your company to believe this stuff.' Thiel's warning came with some guilt that he had created the many-headed monster that was now coming for his friend.

Excerpt adapted from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. Published by arrangement with W. W. Norton & Company. Copyright © 2025 by Keach Hagey.
