Latest news with #MoreEverythingForever


Time of India
16-07-2025
- Science
- Time of India
Elon Musk's Mars plan is a dangerous illusion, warns astrophysicist Adam Becker
Elon Musk's long-standing dream to colonize Mars has come under sharp criticism from renowned astrophysicist and author Adam Becker, who calls it "the stupidest thing" one could pursue. In a recent interview and in his new book More Everything Forever, Becker argues that efforts by billionaires like Musk and Jeff Bezos to settle Mars are nothing more than 'sci-fi fantasies' detached from scientific and ethical realities. Despite Musk's framing of Mars as a backup plan for humanity in the event of a global catastrophe, Becker insists that even a damaged Earth would remain far more habitable than the red planet. He believes these grandiose space ambitions are more about escaping fears than solving real problems.

Elon Musk's Mars vision: 'Stupidest thing', says Becker
Becker takes a blunt stance on Musk's idea of Mars as a 'lifeboat' for humanity. 'We could get hit with an asteroid, detonate every nuclear weapon, or see the worst-case scenario for climate change, and Earth would still be more habitable than Mars,' he told Rolling Stone. He cites the planet's lack of breathable air, radiation exposure, and extreme conditions as insurmountable barriers. His critique also targets the illusion that technology alone can make Mars livable. 'Any cursory examination of facts about Mars makes it clear, it's not a place for humans,' Becker asserts.

Childhood wonder meets scientific reality
Once a strong believer in space colonization, Becker admits that his views changed as he studied the harsh truths of space environments. 'As I got older, I realized, "Oh, that's not happening." We're not going to go to space, and certainly not to make things better,' he told The Harvard Gazette. He accuses tech billionaires of pouring resources into escapist dreams instead of addressing problems on Earth. According to him, their space crusades reflect deep-seated fears rather than rational strategy.

Scientists warn of risks in billionaire space ambitions
Becker isn't the only critic. Fellow astrophysicist Lawrence Krauss has also denounced Musk's Mars plan, calling it 'logistically ludicrous' and 'scientifically and politically dangerous.' Despite these warnings, Musk remains committed to his goal of building a Mars colony of at least one million people, positioning SpaceX as the spearhead of this vision. Yet experts argue that such ambitions, if rushed or poorly planned, could have disastrous consequences both scientifically and socially.

Yahoo
14-07-2025
- Entertainment
- Yahoo
What Is Up With These Tech Billionaires? This Astrophysicist Has Answers
Fresh off a Ph.D. in astrophysics, science journalist Adam Becker moved to Silicon Valley with an academic's acclimation to hearing the word 'no.' 'In academic science, you need to doubt yourself,' he says. 'That's essential to the process.' So it was strange to find himself suddenly surrounded by a culture that branded itself as data-oriented and scientific but where, he soon came to realize, the ideas were more grounded in science fiction than in actual science and the grip on reality was tenuous at best. 'What this sort of crystallized for me,' says Becker, 'was that these tech guys — who people think of as knowing a lot about science — actually don't really know anything about science at all.'

In More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, published this spring, Becker subjects Silicon Valley's ideology to some much-needed critical scrutiny, poking holes in — and a decent amount of fun at — the outlandish ideas that so many tech billionaires take as gospel. In so doing, he champions reality while also exposing the dangers of letting the tech billionaires push us toward a future that could never actually exist. 'The title of the book is More Everything Forever,' says Becker. 'But the secret title of the book, like, in my heart is These Fucking People.'

Over several Zooms, Rolling Stone recently chatted with Becker about these fucking people, their magical thinking, and what the rest of us can do to fight for a reality that works for us.

A lot of people who move to Silicon Valley get swept up in its vibe. How did you avoid it?
I did sort of see the glittering temptation of Silicon Valley, but there's a toxic positivity to the culture. The startup ethos out here runs on positive emotion, and especially hype. It needs hype. It can't function without it. It's not enough that your startup could be widely adopted. It needs to change the world. It has to be something that's going to make everything better. So this ends up becoming an exercise in meaning-making, and then people start talking about these startups — their own or other people's — in semi-religious or explicitly religious terms. And it was just a shock to see all of these people talking this way. It all feels plastic and fake. I thought, Oh wow, this is awful. I want to watch these people and see what the hell they're up to. I want to understand what is happening here, because this is bad.

And what were they up to, as far as you could tell?
Underpinning a lot of that toxic positivity was this idea that if you just make more tech, eventually tech will improve itself and become super-intelligent and godlike. [The technocrats] subscribe to a kind of ideology of technological salvation — and I use that word 'salvation' very deliberately in the Christian sense. They believe that technology is going to bring about the end of this world and usher in a new perfect world, a kind of transhumanist, algorithmically guaranteed utopia, where every problem in the world gets reduced to a problem that can be solved with technology. And this will allow for perpetual growth, which allows for perpetual wealth creation and resource extraction. These are deeply unoriginal ideas about the future.
They're from science fiction, and I didn't know how seriously people were taking them. And then I started seeing people take them very, very seriously indeed. So, I was like, 'OK, let me go talk to actual experts in the areas these people are talking about.' I talked to the experts, and: Yeah, it's all nonsense.

What exactly is nonsensical about it?
It's a story that is based on a lot of ideas that have no evidence for them and a great deal of evidence against them. It's based on a lot of wrong ideas. For example, I think the public perception of AI has been driven by narratives that have no foundation in reality. What does it mean to say a machine is as intelligent as a human? What does 'intelligence' mean? What does it mean to say that an intelligent machine could design an even more intelligent one? Intelligence is not this monolithic thing that is measured by IQ tests, and the history of humans trying to think about intelligence as a monolithic thing is a deeply troubling and problematic history that usually gets tied to eugenics and racism, because that's what those tests were invented for. And so, unsurprisingly, there's a fair amount of eugenics and racism thrown around in these communities that discuss these ideas really seriously. There's also no particular reason to believe that the kinds of machines that we are building now and calling 'AI' are sufficiently similar to the human brain to be able to do what humans do. Calling the systems that we have now 'AI' is a kind of marketing tool. You can see that if you think about the deflation in the term that's occurred just in the last 30 years. When I was a kid, calling something 'AI' meant Commander Data from Star Trek, something that can do what humans do. Now, AI is, like, really good autocomplete. That's not to say that it would never be possible to build an artificial machine that does what humans do, but there's no reason to think that these can and a lot of reason to think that they can't. And the self-improvement thing is kind of silly, right? It's like saying, 'Oh, you can become an infinitely good brain surgeon by doing brain surgery on the brain surgery part of your brain.'

Can you explain the difference between the systems we have now, which we call 'AI,' and the systems that would qualify as AGI? How big is the gulf, and what are the major impediments to bridging it?
So one of the problems here is that 'AGI' is ill-defined, and the vagueness is strategically useful for the people who talk about this stuff. But put that aside and just take a look at what a large language model like ChatGPT does. It's a text generation engine. I feel like that's a much better way of talking about it than calling it 'AI.' ChatGPT only cares about one thing: generating the next word based on what words have already been generated and produced in the conversation so far. And to do that, ChatGPT consumes roughly the entire internet. It was trained on the entire internet to pull out statistical patterns in the language usage. It's like this smeared-out average voice of the internet, and when you ask it a question, all it cares about is answering that question in that voice. It doesn't care about things like answering the question correctly. That only happens accidentally, as a result of trying to sound like the text it was trained on. And so when these machines, quote-unquote, 'hallucinate,' when they make things up and get things wrong, they're not doing anything differently than they're doing when they get the right answer, because they only know how to do one thing. They're constantly hallucinating. That's all they do.
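To make Becker's 'text generation engine' description concrete, here is a minimal, hypothetical Python sketch. It is emphatically not how ChatGPT is built (real systems use large neural networks trained on vast corpora, not word-pair counts), but it shows the shape of the point: the program learns only which words tend to follow which in its training text, then writes by sampling a plausible next word, and truth never enters into the process.

```python
# Toy next-word generator: a drastic simplification of the idea described
# above. It learns which words follow which in a training text, then writes
# by repeatedly sampling a plausible next word. Nothing in it models whether
# its output is true; it only imitates patterns it has seen.
import random
from collections import defaultdict

def train(corpus):
    """Record, for each word, every word observed immediately after it."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=12):
    """Emit text by repeatedly sampling a word seen after the current one."""
    out = [start]
    for _ in range(length):
        candidates = followers.get(out[-1])
        if not candidates:  # dead end: this word was never followed by anything
            break
        out.append(random.choice(candidates))  # repeats weight the sampling
    return " ".join(out)

# Tiny stand-in "internet" to train on (hypothetical example text).
corpus = (
    "the rocket went to mars and the rocket came home "
    "and the crew went home and the crew slept on mars"
)
print(generate(train(corpus), "the"))
```

Everything this toy emits is produced the same way whether it happens to be true or not, which is the distinction Becker draws: a right answer and a 'hallucination' come from one and the same mechanism.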
So what we're calling 'artificial intelligence' is really just kind of like an advanced version of spellcheck?
Yeah, in a way. I mean, this is not even the first time in the history of AI that people have been having conversations with these machines and thinking, 'Oh wow, there's actually something in there that's intelligent and helping me.' Back in the 1960s, there was this program called Eliza that basically acted like a very simple version of a therapist, one that just reflects everything you say back to you. So you say, 'Hey Eliza, I had a really bad day today,' and Eliza says, 'Oh, I'm really sorry to hear that. Why did you have a really bad day today?' And then you say, 'I got in a fight with my partner,' and it says, 'Oh, I'm really sorry to hear that. Why did you get in a fight with your partner?' I mean, it's a little bit more complicated than that, but not a lot more complicated than that. It just kind of fills in the blanks. These are stock responses — something that's very clearly not thinking. And people would say, 'Oh, Eliza really helped me. I feel like Eliza really understands who I am.'

The human impulse for connection is powerful.
Precisely. It's the human impulse for connection — and the impulse to attribute human-like characteristics to things that are not human, which we do constantly. We do it with our pets. We do it with random patterns that we find in nature. We'll see an arrangement of rocks and think, 'Oh, that's a smiley face.' That's called 'pareidolia.' And that's what this is.

So current AI is not even close to being human, but these tech titans think it could be godlike?
Sam Altman gave a talk two or three years ago, and he was asked a question about global warming, and he said something like, 'Oh, global warming is a really serious problem, but if we have a super-intelligent AI, then we can ask it, "Hey, how do you build a lot of renewable energy? And hey, how do you build a lot of carbon capture systems? And hey, how do we build them at scale cheaply and quickly?" And then it would solve global warming.' What Sam Altman is saying is that his plan for solving global warming is to build a machine that nobody knows how to build and can't even define, and then ask it for three wishes. But they really believe that this is coming. Altman said earlier this year that he thinks AGI is coming in the next four years. If a godlike AI is coming, then global warming doesn't matter. All that matters is making sure that the godlike AI is good and comes soon and is friendly and helpful to us. And so, suddenly, you have a way of solving all of the problems in the world with this one weird trick, and that one weird trick is the tech that these companies are building. It offers the possibility of control, it offers the possibility of transcendence of all boundaries, and it offers the possibility of tremendous amounts of money.

If you have an understanding of what the technology is doing right now — versus some magical idea of what it could be doing — it sounds like it would be hard to trust it with the future of humanity. Is it just complete delusion?
There's a lot of delusional thinking at work, and it's really, really easy to believe stuff that makes you rich.
But there's also a lot of groupthink. If everybody around you believes this, then that makes it more likely that you're going to believe it, too. And then if all of the most powerful people and the wealthiest people and the most successful people and the most intelligent-seeming people around you all believe this, it's going to make it harder for you not to believe it. And the arguments that they give sound pretty good at first blush. You have to really drill down to find what's wrong with them. If you were raised on a lot of science fiction, especially, these ideas are very familiar to you — and I say this as a huge science fiction fan. And so when you start looking at ideas like super-intelligent AI or going to space, these ideas carry a lot of cultural power. The point is, it's very easy for them to believe these things, because it goes along with this picture of the future that they already had, and it offers to make them a lot of money and give them a lot of power and control. It gives them the possibility of ignoring inconvenient problems, problems that often they themselves are contributing to through their work. And it also gives them a sense of moral absolution and meaning by providing this grand vision and project that they're working toward. They want to save humanity. [Elon] Musk talks about this all the time. [Jeff] Bezos talks about this. Altman talks about this. They all talk about this. And I think that's a pretty powerful drug. Then throw in, for the billionaires, the fact that when you're a billionaire, you get insulated from the world and from criticism because you're surrounded by sycophants who want your money, and it becomes very hard to change your mind about anything.

Your reality testing gets pretty messed up.
Yeah, exactly. Also, a lot of these ideas just sound ridiculous, and so there hasn't been as much trenchant criticism as there should have been over the past decades. And now, suddenly, these guys have lots of money, and they're saying what the future is, and people are just believing that.

So what you're telling me is that I'm not gonna get to live on Mars.
Yeah, that's right. You're not going to. But you shouldn't be disappointed, because Mars sucks. Mars fucking sucks. Just to name a few of the problems: the gravity is too low, the radiation is too high, there's no air, and the dirt is made of poison.

Sounds fun.
Also, you're going to freeze even if you solve all of those problems. I mean, there are some spots where you wouldn't freeze if you really bundled up, but Elton John was right: Mars isn't the place to raise your kid.

It's really terrifying to see the most powerful people in the world — and some of the loudest voices in the world — confuse these beliefs with reality. You talk in the book about how this is a sort of messianic belief, but also about how technological utopia won't be available to everyone — which is a pretty common view in apocalyptic narratives, right? There's a chosen group that will get to enjoy the utopia, but not everyone will.
Look, inequality is a fundamental feature of the world, and I think nobody knows that better than these billionaires. I don't mean 'fundamental' in the sense that it's unalterable. I just mean it's fundamental to how we've structured our society, and billionaires are beneficiaries of that.
But I think that in the version of these utopias that are promoted by these tech billionaires, there are definitely unseen and unquestioned forms of inequality that would lead to some people having a lot more control and a lot more of that utopia than other people would get. A lot of this is in the form of questions that, surprisingly, people don't tend to ask these tech billionaires. Jeff Bezos says that he wants humanity living in giant space stations that have millions of people, and he wants millions of these space stations, so there'll be one trillion people in space generations from now. And that leads to questions like, 'OK, buddy, who's gonna own that?' One of the nice things about living on Earth is that we have these shared natural resources. If you go out into space into an artificial environment that, say, Blue Origin is going to be building, doesn't that mean that Blue Origin or some successor company is going to own those space stations and all of the air and water and whatnot inside? And doesn't that mean that there's somebody who's going to be effectively king of the space station? And if everybody lives in these space stations, isn't that going to be not just a company town but a company civilization? Musk talks about a city with a million people on Mars. The air won't even be free, right? You'll have to pay Musk just to stay alive. That's not my vision of utopia, and I think not many other people's either.

It seems pretty unlikely that these guys are going to get this utopia of which they dream, so how concerned should we even be about their delusions?
They have so much power and so much money that the choices that they make about how to exercise that power and spend that money unavoidably affect the rest of us. This is a real danger that we are seeing and experiencing right now. Musk thinks that his mission to go to Mars and beyond is the salvation of humanity — he has said as much in as many words — and he believes that, therefore, nothing should be allowed to stand in his way, not even law. So, therefore, he supported a lawless candidate for President of the United States, a literal felon, and said that it was important for the future of humanity that that felon win. This is a billionaire interfering with the democratic process and trying to erode the democratic fabric of this country — and succeeding — in order to pursue his own personal vision of utopia that will never happen. That's a fucking problem. And that makes it everybody's business.

I suppose it's also a question of who gets to decide which problems are humanity's biggest.
Which is what a lot of this comes down to, right? Part of the problem with trying to solve issues in the world through billionaire philanthropy is that it's fundamentally undemocratic who gets to make the decision: the billionaire gets to make the decision. Who elected the billionaire? Nobody. And so billionaire philanthropy is an exercise of power and deserves skepticism rather than gratitude. But I think a lot of these billionaires see wealth as proof of someone's value and intelligence, and since they're the wealthiest people who have ever lived, that makes them the smartest people who have ever lived, and so they are the ones who should be leading us into this new utopia. And if the rest of us can't see it or think that it doesn't work, well, that's because we're not as smart as they are. And if experts tell them that it can't work, well, then the experts are wrong, because, you know, if they are smart, why are they so poor?
It's like [these technocrats] are constantly high on a drug called billions of dollars, and the human brain was not built to deal with that. It insulates them from criticism and makes it harder for them to think critically.

What can we do about all this? Are we all just basically fucked?
Well, look, the billionaires have an enormous amount of power and money, but there's a lot more of us than there are of them. Also, we can think critically, and so I think there's a few different things that we can do. In the short term, we need to organize. One of the things that these guys are completely terrified by — and it's one of the reasons they love AI — is the idea of labor organization. They don't want workers rising up. They don't want to have to deal with workers at all, and so I think labor organizing is really important. I think political organizing is really important. We need to build political power structures that can counterbalance the massively outsized power of this really very small community of individuals who just have massive amounts of wealth. And I know that that sounds kind of facile, but I really do think it's what we have to do, and historically it is how [people] have always combated the very wealthy and their fantasies of power. We can also point out when they're wrong. Say, 'The emperor has no clothes, we are not going to Mars, and that is ridiculous.' Public ridicule of these ideas — informed, factually accurate public ridicule — is part of what I'm trying to do, and I think it's a really important and powerful tool. And then in the longer term — hopefully not that far away, if we get to a place where we have political power to balance these guys out — I think we've got to tax their wealth away. They did not earn that money alone. They needed the infrastructure and community that the rest of us provide, and they also, frankly, needed a lot of government investment. They are the biggest welfare queens in existence, right? Silicon Valley got enormous amounts of government spending to benefit it over the years, both on infrastructure and in buying products and whatnot. The government built the internet. The government was the biggest client of Silicon Valley back when it was first starting up, through buying computer chips for the space program. The government built the space program, without which you wouldn't be able to have something like SpaceX. So I think it's time to stop giving them handouts and start saying, 'What we invested, the bill has come due.'

Sydney Morning Herald
11-06-2025
- Science
- Sydney Morning Herald
A terrifying tour of Silicon Valley's deluded plans for a techno-utopia
TECHNOLOGY
More Everything Forever
Adam Becker
Basic Books, $34.99

In commercial space flight PR jargon, a catastrophic launch failure is routinely glossed over with seemingly innocuous explanations. The second stage booster did not 'explode, destroying millions of dollars' worth of engines and several satellites'. No, it 'underwent a rapid unplanned disassembly', the results of which were 'mission sub-optimal'. Ask a deckhand on a fishing trawler in the Gulf of Mexico what he saw when yet another doomed SpaceX Starship detonated 10 kilometres above his head, and these are not words he would use.

But to hear Elon Musk tell it, his Starship and its successors will soon be the workhorses of an interplanetary fleet that will take humans to Mars. And not just a few intrepid Apollo-style explorers collecting rocks and taking pictures, mind you. Musk and his disciples want to colonise the red planet, and by that he means setting up an entire, self-sufficient civilisation there.

Adam Becker's marvellous 'disassembly' of Musk's ludicrous fantasy is neither unplanned nor sub-optimal. Subtitled AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, More Everything Forever gives a trawler deckhand's eye view of what awaits us, should a deluded cabal of tech billionaires somehow make their dreams come true. The paving stones of these roads to hell are, seductively as ever, presented by advocates as nothing more sinister than the greatest good for the greatest number, and history is littered with the wreckage of such beneficent intentions.

And so it is that Becker begins his descent into the AI underworld in the seemingly harmless realm of EA, or 'effective altruism': the notion that it is incumbent on people with wealth to share it around, and so help those less fortunate. The catch here is the E before the A, and what began as a fairly harmless, if occasionally wacky, crusade in the minds of philosophers like Peter Singer can, and did, come utterly unhinged when the brutal logic of algorithms is used to maximise the 'effectiveness' of altruism, particularly in the hands of cryptocurrency lunatics such as Sam Bankman-Fried and the deluded Scottish philosopher William MacAskill.

Bankman-Fried was very keen on EA, and he did give away a lot of money, but as most of it belonged to other people, he was convicted of fraud on a titanic scale in 2023 and consigned to the hoosegow for 25 years. But 25 years is a millisecond to MacAskill, an advocate of 'longtermism', a clumsy word which means that, in order to ensure the greatest good for the greatest number, it's clear (to longtermists, anyway) that we need a lot more people to make happy. More people, indeed, than Earth's ecology could ever support.

Yahoo
28-05-2025
- Politics
- Yahoo
A Reality Check for Tech Oligarchs
Technologists currently wield a level of political influence that was until recently considered unthinkable. While Elon Musk's Department of Government Efficiency slashes public services, Jeff Bezos takes celebrities to space on Blue Origin and the CEOs of AI companies speak openly of radically transforming society. As a result, there has never been a better moment to understand the ideas that animate these leaders' particular vision of the future. In his new book, More Everything Forever, the science journalist Adam Becker offers a deep dive into the worldview of techno-utopians such as Musk—one that's underpinned by promises of AI dominance, space colonization, boundless economic growth, and, eventually, immortality. Becker's premise is bracing: tech oligarchs' wildest visions of tomorrow amount to a modern secular theology that is both mesmerizing and, in his view, deeply misguided. The author's central concern is that these grand ambitions are not benign eccentricities but ideologies with real-world consequences.

What do these people envision? In their vibrant utopia, humanity has harnessed technology to transcend all of its limits—old age and the finite bounds of knowledge most of all. Artificial intelligence oversees an era of abundance, automating labor and generating wealth so effectively that every person's needs are instantly met. Society is powered entirely by clean energy, while heavy industry has been relocated to space, turning Earth into a pristine sanctuary. People live and work throughout the solar system. Advances in biotechnology have all but conquered disease and aging. At the center of this future, a friendly AI—aligned with human values—guides civilization wisely, ensuring that progress remains tightly coupled with the flourishing of humanity and the environment.

Musk and the likes of Bezos and OpenAI's CEO, Sam Altman, aren't merely imagining sci-fi futures as a luxury hobby—they are funding them, proselytizing for them, and, in a growing number of cases, trying to reorganize society around them. In Becker's view, the rich are not merely chasing utopia but prioritizing their vision of the future over the very real concerns of people in the present. Impeding environmental research, for instance, makes sense if you believe that human life will continue to exist in an extraterrestrial elsewhere. More Everything Forever asks us to take these ideas seriously, not necessarily because they are credible predictions, but because some people in power believe they are.

Becker, in prose that is snappy if at times predictable, highlights the quasi-spiritual nature of Silicon Valley's utopianism, which rests on two very basic beliefs: first, that death is scary and unpleasant; and second, that thanks to science and technology, the humans of the future will never have to be scared or do anything unpleasant. 'The dream is always the same: go to space and live forever,' Becker writes. (One reason for the interest in space is that longevity drugs, according to the tech researcher Benjamin Reinhardt, can be synthesized only 'in a pristine zero-g environment.') This future will overcome not just human biology but a fundamental rift between science and faith. Becker quotes the writer Meghan O'Gieblyn, who observes in her book God, Human, Animal, Machine that 'what makes transhumanism so compelling is that it promises to restore through science the transcendent—and essentially religious—hopes that science itself obliterated.'
Becker demonstrates how certain contemporary technologists flirt with explicitly religious trappings. Anthony Levandowski, the former head of Google's self-driving-car division, for instance, founded an organization to worship artificial intelligence as a godhead. But Becker also reveals the largely forgotten precedents for this worldview, sketching a lineage of thought that connects today's Silicon Valley seers to earlier futurist prophets. In the late 19th century, the Russian philosopher Nikolai Fedorov preached that humanity's divine mission was to physically resurrect every person who had ever lived and settle them throughout the cosmos, achieving eternal life via what Fedorov called 'the regulation of nature by human reason and will.' The rapture once preached and beckoned in churches has been repackaged for secular times: in place of souls ascending to heaven, there are minds preserved digitally—or even bodies kept alive—for eternity. Silicon Valley's visionaries are, in this view, not all cold rationalists; many of them are dreamers and believers whose fixations constitute, in Becker's view, a spiritual narrative as much as a scientific one—a new theology of technology.

Let's slow down: Why exactly is this a bad idea? Who wouldn't want 'perfect health, immortality, yada yada yada,' as the AI researcher Eliezer Yudkowsky breezily summarizes the goal to Becker? The trouble, Becker shows, is that many of these dreams of personal transcendence disregard the potential human cost of working toward them. For the tech elite, these are visions of escape. But, Becker pointedly writes, 'they hold no promise of escape for the rest of us, only nightmares closing in.'

Perhaps the most extreme version of this nightmare is the specter of an artificial superintelligence, or AGI (artificial general intelligence). Yudkowsky predicts to Becker that a sufficiently advanced AI, if misaligned with human values, would 'kill us all.' Forecasts for this type of technology, once fringe, have gained remarkable traction among tech leaders, and they almost always trend toward the stunningly optimistic. Sam Altman is admittedly concerned about the prospects of rogue AI—he famously admitted to having stockpiled 'guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to'—but these worries don't stop him from actively planning for a world reshaped by AI's exponential growth. In Altman's words, we live on the brink of a moment in which machines will do 'almost everything' and trigger societal changes so rapid that 'the future can be almost unimaginably great.' Becker is less sanguine, writing that 'we just don't know what it will take to build a machine to do all the things a human can do.' And from his point of view, it's best that things remain that way.

Becker is at his rhetorically sharpest when he examines the philosophy of 'longtermism' that underlies much of this AI-centric and space-traveling fervor. Longtermism, championed by some Silicon Valley–adjacent philosophers and the effective-altruism movement, argues that the weight of the future—the potentially enormous number of human (or post-human) lives to come—overshadows the concerns of the present. If preventing human extinction is the ultimate good, virtually any present sacrifice can and should be rationalized.
Becker shows how today's tech elites use such reasoning to support their own dominance in the short term, and how rhetoric about future generations tends to mask injustices and inequalities in the present. When billionaires claim that their space colonies or AI schemes might save humanity, they are also asserting that only they should shape humanity's course. Becker observes that this philosophy is 'made by carpenters, insisting the entire world is a nail that will yield to their ministrations.'

Becker's perspective is largely that of a sober realist doing his darnedest to cut through delusion, yet one might ask whether his argument occasionally goes too far. Silicon Valley's techno-utopian culture may be misguided in its optimism, but is it only that? A gentle counterpoint: the human yearning for transcendence stems from a dissatisfaction with the present and a creative impulse, both of which have driven genuine progress. Ambitious dreams—even seemingly outlandish ones—have historically spurred political and cultural transformation. Faith, too, has helped people face the future with optimism. It should also be acknowledged that many of the tech elite Becker critiques do show some awareness of ethical pitfalls. Not all (or even most) technologists are as blithe or blinkered as Becker sometimes seems to suggest.

In the end, this is not a book that revels in pessimism or cynicism; rather, it serves as a call to clear-eyed humanism. In Becker's telling, tech leaders err not in dreaming big, but in refusing to reckon with the costs and responsibilities that come with their dreams. They preach a future in which suffering, scarcity, and even death can be engineered away, yet they discount the very real suffering here and now that demands our immediate attention and compassion. In an era when billionaire space races and AI hype dominate headlines, More Everything Forever arrives as a much-needed reality check. At times, the book is something more than that: a valuable meditation on the questionable stories we tell about progress, salvation, and ourselves.

Article originally published at The Atlantic