Latest news with #MoreEverythingForever
Atlantic
3 days ago
- Business
- Atlantic
Silicon Valley's Misguided Utopian Visions
Technologists currently wield a level of political influence that was recently considered unthinkable. While Elon Musk's Department of Government Efficiency slashes public services, Jeff Bezos takes celebrities to space on Blue Origin and the CEOs of AI companies speak openly of radically transforming society. As a result, there has never been a better moment to understand the ideas that animate these leaders' particular vision of the future. In his new book, More Everything Forever, the science journalist Adam Becker offers a deep dive into the worldview of techno-utopians such as Musk—one that's underpinned by promises of AI dominance, space colonization, boundless economic growth, and eventually, immortality. Becker's premise is bracing: Tech oligarchs' wildest visions of tomorrow amount to a modern secular theology that is both mesmerizing and, in his view, deeply misguided. The author's central concern is that these grand ambitions are not benign eccentricities, but ideologies with real-world consequences.

What do these people envision? In their vibrant utopia, humanity has harnessed technology to transcend all of its limits—old age and the finite bounds of knowledge most of all. Artificial intelligence oversees an era of abundance, automating labor and generating wealth so effectively that every person's needs are instantly met. Society is powered entirely by clean energy, while heavy industry has been relocated to space, turning Earth into a pristine sanctuary. People live and work throughout the solar system. Advances in biotechnology have all but conquered disease and aging. At the center of this future, a friendly AI—aligned with human values—guides civilization wisely, ensuring that progress remains tightly coupled with the flourishing of humanity and the environment.

Musk, along with the likes of Bezos and OpenAI's CEO, Sam Altman, aren't merely imagining sci-fi futures as a luxury hobby—they are funding them, proselytizing for them, and, in a growing number of cases, trying to reorganize society around them. In Becker's view, the rich are not merely chasing utopia, but prioritizing their vision of the future over the very real concerns of people in the present. Impeding environmental research, for instance, makes sense if you believe that human life will continue to exist in an extraterrestrial elsewhere. More Everything Forever asks us to take these ideas seriously, not necessarily because they are credible predictions, but because some people in power believe they are.

Becker, in prose that is snappy if at times predictable, highlights the quasi-spiritual nature of Silicon Valley's utopianism, which is based on two very basic beliefs. First, that death is scary and unpleasant. And second, that thanks to science and technology, the humans of the future will never have to be scared or do anything unpleasant. 'The dream is always the same: go to space and live forever,' Becker writes. (One reason for the interest in space is that longevity drugs, according to the tech researcher Benjamin Reinhardt, can be synthesized only 'in a pristine zero-g environment.') This future will overcome not just human biology but a fundamental rift between science and faith. Becker quotes the writer Meghan O'Gieblyn, who observes in her book God, Human, Animal, Machine that 'what makes transhumanism so compelling is that it promises to restore through science the transcendent—and essentially religious—hopes that science itself obliterated.'
Becker demonstrates how certain contemporary technologists flirt with explicitly religious trappings. Anthony Levandowski, the former head of Google's self-driving-car division, for instance, founded an organization to worship artificial intelligence as a godhead. But Becker also reveals the largely forgotten precedents for this worldview, sketching a lineage of thought that connects today's Silicon Valley seers to earlier futurist prophets. In the late 19th century, the Russian philosopher Nikolai Fedorov preached that humanity's divine mission was to physically resurrect every person who had ever lived and settle them throughout the cosmos, achieving eternal life via what Fedorov called 'the regulation of nature by human reason and will.' The rapture once preached and beckoned in churches has been repackaged for secular times: In place of souls ascending to heaven, there are minds preserved digitally—or even bodies kept alive—for eternity. Silicon Valley's visionaries are, in this view, not all cold rationalists; many of them are dreamers and believers whose fixations constitute, in Becker's view, a spiritual narrative as much as a scientific one—a new theology of technology.

Let's slow down: Why exactly is this a bad idea? Who wouldn't want 'perfect health, immortality, yada yada yada,' as the AI researcher Eliezer Yudkowsky breezily summarizes the goal to Becker? The trouble, Becker shows, is that many of these dreams of personal transcendence disregard the potential human cost of working toward them. For the tech elite, these are visions of escape. But, Becker pointedly writes, 'they hold no promise of escape for the rest of us, only nightmares closing in.'

Perhaps the most extreme version of this nightmare is the specter of an artificial superintelligence, or AGI (artificial general intelligence). Yudkowsky predicts to Becker that a sufficiently advanced AI, if misaligned with human values, would 'kill us all.' Forecasts for this type of technology, once fringe, have gained remarkable traction among tech leaders, and almost always trend to the stunningly optimistic. Sam Altman is admittedly concerned about the prospects of rogue AI—he famously admitted to having stockpiled 'guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to'—but these worries don't stop him from actively planning for a world reshaped by AI's exponential growth. In Altman's words, we live on the brink of a moment in which machines will do 'almost everything' and trigger societal changes so rapid that 'the future can be almost unimaginably great.' Becker is less sanguine, writing that 'we just don't know what it will take to build a machine to do all the things a human can do.' And from his point of view, it's best that things remain that way.

Becker is at his rhetorically sharpest when he examines the philosophy of 'longtermism' that underlies much of this AI-centric and space-traveling fervor. Longtermism, championed by some Silicon Valley–adjacent philosophers and the effective-altruism movement, argues that the weight of the future—the potentially enormous number of human (or post-human) lives to come—overshadows the concerns of the present. If preventing human extinction is the ultimate good, virtually any present sacrifice can and should be rationalized.
Becker shows how today's tech elites use such reasoning to support their own dominance in the short term, and how rhetoric about future generations tends to mask injustices and inequalities in the present. When billionaires claim that their space colonies or AI schemes might save humanity, they are also asserting that only they should shape humanity's course. Becker observes that this philosophy is 'made by carpenters, insisting the entire world is a nail that will yield to their ministrations.'

Becker's perspective is largely that of a sober realist doing his darnedest to cut through delusion, yet one might ask whether his argument occasionally goes too far. Silicon Valley's techno-utopian culture may be misguided in its optimism, but is it only that? A gentle counterpoint: The human yearning for transcendence stems from a dissatisfaction with the present and a creative impulse, both of which have driven genuine progress. Ambitious dreams—even seemingly outlandish ones—have historically spurred political and cultural transformation. Faith, too, has helped people face the future with optimism. It should also be acknowledged that many of the tech elite Becker critiques do show some awareness of ethical pitfalls. Not all (or even most) technologists are as blithe or blinkered as Becker sometimes seems to suggest.

In the end, this is not a book that revels in pessimism or cynicism; rather, it serves as a call to clear-eyed humanism. In Becker's telling, tech leaders err not in dreaming big, but in refusing to reckon with the costs and responsibilities that come with their dreams. They preach a future in which suffering, scarcity, and even death can be engineered away, yet they discount the very real suffering here and now that demands our immediate attention and compassion. In an era when billionaire space races and AI hype dominate headlines, More Everything Forever arrives as a much-needed reality check. At times, the book is something more than that: a valuable meditation on the questionable stories we tell about progress, salvation, and ourselves.


NZ Herald
11-05-2025
- Science
- NZ Herald
Mission critical: Expert eye on frightening fallacies and fantasies behind tech titans' AI visions
Adam Becker's More Everything Forever is a must-read if you want to understand the dangerous new world we find ourselves in. Photo / Getty Images

Billionaire pseudo-libertarian tech-bros with their utopian fever dreams terrify me. They clearly read a lot of William Gibson and Nietzsche while getting the wrong end of the stick. The likes of Elon Musk, Jeff Bezos, Mark Zuckerberg, Peter Thiel and Sam Altman labour under the delusion that some lucky investments in other people's ideas make them natural Übermenschen to 'benignly' micromanage the destiny of humanity.

American author, science philosopher and trained astrophysicist Adam Becker, in his compulsive, brilliantly written book More Everything Forever: AI Overlords, Space Empires and Silicon Valley's Crusade to Control the Fate of Humanity, comes at the subject with the exasperated contempt it deserves. Think of it as Fear and Loathing in Silicon Valley. Becker digs into these fanboy fantasias of the future and squashes the plausibility of most of the 'science'. It helps that Becker moves in the same elevated circle of intellectuals and consultants, and speaks with first-hand understanding. Rather than presenting real science, it's a toxic red-blooded protein shake of authoritarian ambition, Spenglerian pseudoscience, eugenics, back-of-the-envelope futurism and raging megalomaniacal narcissism. What's more, it's a cynical distraction from genuine issues like the climate crisis and social inequality.

Usefully, Becker breaks the bilge down into its key ideas and players in digestible slices before demolishing them. For example, a concept central to the mindset (or snow job) is a philosophical argument called 'effective altruism'. On one level it sounds thoroughly anodyne – the evidence-based maximising of limited resources for the greater good. That's all fine and dandy until it gets into its logical cups and heads into 'useless eaters' territory, obsessing about demographic IQ scores. This dangerously unempathetic, utilitarian take on ethics underpins much of the tech-broligarchy's thinking, but paradoxically, it seems to co-exist with the notion that an infinitely expanding post-scarcity society is just around the corner. The tech-broligarchs, like Lewis Carroll's White Queen, believe as many as six impossible things before breakfast. Often what it renders down to is highly privileged, poorly socialised, rather immature individuals from engineering backgrounds blundering with a utilitarian hammer into other specialities, from biology to the humanities, each with their own subtleties and nuances, and just seeing fields of nails.

However, a lot of what these would-be messiahs are pushing is just not scientifically feasible in the near future or even as a pipe dream. Becker is very good at pouring cold water on favourite hobby-horses such as the AI singularity – the point where AI surpasses human intelligence and controls its own technological advance. Futurist Ray Kurzweil's mind-uploads-to-computers nonsense is similarly debunked (too fundamentally different and incompatible), along with spreading out into the universe to infinity and beyond (not enough energy and too far away). The numbers never add up. Think about it. Musk wants to colonise Mars in a decade, but we still have no feasible way of shielding against the radiation, there's no industrial base there to keep the whole thing functioning, and we don't even know if human beings can survive long term at about a third of Earth's gravity.
Yet Musk, typical of the movement, gives the impression it's just a matter of Schopenhauerian willpower and throwing enough government subsidies at it. It's never entirely clear how seriously the tech bros actually take these fancies, or whether it's just window dressing for something more concrete: political control. Becker is forensic in dissecting the putsch for power, all the more impressive given the bulk of the book must have been completed before it became crystal clear Trump was a sort of Trojan donkey for these people.

So no, we aren't all going to be golden immortal cyborg demigods colonising the stars with our perfect AI husbandos and waifus. It's not going to be Star Trek. It's not even going to be like Ben Elton's 1989 satire Stark, in which a bunch of billionaires decide to tank planet Earth and escape to the moon.

If you found Douglas Rushkoff's 2022 book Survival of the Richest: Escape Fantasies of the Tech Billionaires as morbidly fascinating as I did, More Everything Forever is a thoroughly worthy companion volume. It will leave you wishing they'd all get on Elon's rocket to Mars, while also wondering who will scrub the gold toilets for them when the robots rebel. It's a must-read if you want to understand the dangerous new world we find ourselves in.

'The futures of technological salvation,' says Becker, 'are sterile impossibilities, and they would be brutally destructive if they did come to pass. The cosmos is more than a giant well of resources, and humans are more than siphons sucking it dry. But I can't offer a specific future as an alternative. What I can tell you is that anyone who claims to know the one inevitable future, or the one good path for humanity, is someone who deserves your deepest scepticism.'

More Everything Forever: AI Overlords, Space Empires and Silicon Valley's Crusade to Control the Fate of Humanity, by Adam Becker (Hachette, $39.99), is out now.


New York Times
23-04-2025
- Science
- New York Times
Go to Mars, Never Die and Other Big Tech Pipe Dreams
Elon Musk predicts that a million Earthlings will be living on Mars in 20 years — not just for the exciting adventure but as a matter of survival: 'We must preserve the light of consciousness by becoming a space-faring civilization & extending life to other planets.' Not so fast, says the science journalist Adam Becker. As he puts it in his smart and wonderfully readable new book, 'More Everything Forever,' life on Mars is bound to be worse than life on our own planet, however much ecological havoc we have wreaked.

Becker, who has a Ph.D. in astrophysics and is the author of a previous (equally readable) book about quantum theory, clearly lays out the many problems of getting to, and surviving on, the Red Planet. There is the not insignificant issue of enormous amounts of surface radiation. There is also the not insignificant issue of the toxic dust. Exposure to Martian air will boil the saliva off your tongue before it asphyxiates you. And even if astronauts manage to build a system of pressurized tunnels for living underground — a very big if, given the difficulties of getting astronauts there, let alone construction materials — the number of people living in such bunkers would have to be pretty small. They would require regular shipments of food and water from Earth, presumably via Musk's company SpaceX. 'Even the air the Mars residents breathe would cost money,' Becker writes. It sounds like a miserable way to live. 'Mars would make Antarctica look like Tahiti.'

The plan to colonize Mars is just one of the fantastical scenarios Becker writes about in 'More Everything Forever,' which traces the various plans advanced by billionaire tech entrepreneurs in their grand bids to 'save humanity.' From artificial intelligence to colonizing outer space, the animating force behind such projects is what Becker calls 'the ideology of technological salvation.' The ideas it propagates have three main features, he says. First, they are reductive. Second, they are profitable, aligning neatly with the tech industry's imperative of perpetual growth. Third, and most important, they offer transcendence — the promise of an imagined end that justifies blowing through any actual limits, including conventional morality.

The futuristic visions that flow from this ideology are binary: paradise or annihilation. Becker draws an incisive portrait of the debates over artificial intelligence, showing how A.I.'s champions and doomsayers occupy two sides of the same coin. On one side are techno-optimists like Ray Kurzweil, who predicts a day when all-powerful machines will eliminate poverty and disease and allow us to 'live as long as we want.' The doomsayers, by contrast, worry about 'A.I. alignment,' or the prospect that such machines will one day take our jobs or even kill us all. An influential thought experiment among the doomsayers involves a 'superintelligence' whose sole goal is to manufacture as many paper clips as possible; eventually this creature turns everything into paper clips.

Toggling between dystopian warnings and promises of deliverance are the tech entrepreneurs. Becker cites Sam Altman, the C.E.O. of OpenAI, who has proposed that his company capture the wealth created by A.I. and ameliorate the socioeconomic fallout by redistributing part of that wealth to the public. 'The changes coming are unstoppable,' Altman once wrote, yet 'the future can be almost unimaginably great.' Becker argues that Silicon Valley's preoccupations have created their own kind of warped ethics.
'The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more — to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, to justify nearly any action they might want to take — all in the name of saving humanity from a threat that doesn't exist, aiming at a utopia that will never come.'

While tech moguls make passing mention of how A.I. will bring untold abundance, the grubbier problems of the here and now typically get less attention in Silicon Valley than spectacular thought experiments on 'existential risk,' however far-fetched. If you're a billionaire who has been richly rewarded for your contrarian moonshots, why waste time analyzing stubbornly ordinary problems, like poverty and inequality, when you could be dreaming about colonizing the galaxy and thwarting runaway paper-clip machines?

And so Silicon Valley has given a lot of money to the effective altruism community, which has provided scholarly legitimacy to tech billionaires' hobbyhorses. Effective altruists encourage the use of reason and data for making philanthropic decisions, but Becker highlights how some of their most influential thinkers have come up with truly bizarre 'longtermist' calculations by multiplying minuscule probabilities of averting a hypothetical cataclysm with gargantuan estimates of 'future humans' saved. One prominent paper concluded that $100 spent on A.I. safety saves one trillion future lives — making it 'far more' valuable 'than the near-future benefits' of distributing anti-malarial bed nets. 'For a strong longtermist,' Becker writes, 'investing in a Silicon Valley A.I. safety company is a more worthwhile humanitarian endeavor than saving lives in the tropics.'

Tech billionaires' pet projects can sound deliriously futuristic, but lurking underneath them all is an obsession that is very old. It's the primal fear of death, encased in a shiny new rocket ship. Becker quotes other writers who have noticed how Silicon Valley, with its omnivorous appetite, has turned existential angst into yet another input. 'Space has become the ultimate imperial ambition,' the scholar Kate Crawford writes in 'Atlas of A.I.,' 'symbolizing an escape from the limits of Earth, bodies and regulation.' In 'God, Human, Animal, Machine' (2021), Meghan O'Gieblyn describes how technology took over the domain of religion and philosophy: 'All the eternal questions have become engineering problems.'

The 'ideology of technological salvation' that Becker identifies can therefore be understood, too, as a desperate attempt to deal with despair. Amid his sharp criticisms of the tech figures he writes about is a resolute call for compassion. He encourages us not to get hung up on galaxies far, far away but to pay more attention to our own fragile planet and the frail humans around us. 'We are here now, in a world filled with more than we could ever reasonably ask for,' Becker writes. 'We can take joy in that, and find satisfaction and meaning in making this world just a little bit better for everyone and everything on it, regardless of the ultimate fate of the cosmos.'
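
The longtermist arithmetic the Times review describes, a minuscule probability multiplied by a gargantuan estimate of future lives, can be made concrete with a toy calculation. The figures in the sketch below are purely hypothetical placeholders, not numbers from Becker's book or from the paper the review mentions; they are chosen only to show how $100 can appear to "save" a trillion future lives once the assumed future population is large enough.

```python
# Toy illustration of longtermist expected-value arithmetic.
# All numbers are hypothetical placeholders, not figures from the book or the review.

assumed_future_lives = 1e24             # assumed count of potential future (post-)human lives
risk_reduction_per_100_dollars = 1e-12  # assumed cut in extinction probability bought by $100

expected_future_lives_saved = assumed_future_lives * risk_reduction_per_100_dollars
print(f"Expected future lives per $100: {expected_future_lives_saved:.0e}")  # 1e+12, i.e. one trillion

# A conventional intervention, also with an assumed order-of-magnitude figure:
bed_net_lives_per_100_dollars = 0.02    # assumed lives saved per $100 of anti-malarial bed nets

# The tiny probability times the enormous population dominates the comparison.
print(expected_future_lives_saved > bed_net_lives_per_100_dollars)  # True
```

The point of the sketch is only that the conclusion is driven almost entirely by the two assumed inputs, which is exactly the feature of "strong longtermist" calculations that Becker, per the review, flags as bizarre.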