Silicon Valley's Misguided Utopian Visions


The Atlantic · 2 days ago

Technologists currently wield a level of political influence that would have seemed unthinkable until recently. Elon Musk's Department of Government Efficiency slashes public services, Jeff Bezos takes celebrities to space on Blue Origin, and the CEOs of AI companies speak openly of radically transforming society. There has never been a better moment to understand the ideas that animate these leaders' particular vision of the future.
In his new book, More Everything Forever, the science journalist Adam Becker offers a deep dive into the worldview of techno-utopians such as Musk—one that's underpinned by promises of AI dominance, space colonization, boundless economic growth, and eventually, immortality. Becker's premise is bracing: Tech oligarchs' wildest visions of tomorrow amount to a modern secular theology that is both mesmerizing and, in his view, deeply misguided. The author's central concern is that these grand ambitions are not benign eccentricities, but ideologies with real-world consequences.
What do these people envision? In their vibrant utopia, humanity has harnessed technology to transcend all of its limits—old age and the finite bounds of knowledge most of all. Artificial intelligence oversees an era of abundance, automating labor and generating wealth so effectively that every person's needs are instantly met. Society is powered entirely by clean energy, while heavy industry has been relocated to space, turning Earth into a pristine sanctuary. People live and work throughout the solar system. Advances in biotechnology have all but conquered disease and aging. At the center of this future, a friendly AI—aligned with human values—guides civilization wisely, ensuring that progress remains tightly coupled with the flourishing of humanity and the environment.
Musk, along with the likes of Bezos and OpenAI's CEO, Sam Altman, aren't merely imagining sci-fi futures as a luxury hobby—they are funding them, proselytizing for them, and, in a growing number of cases, trying to reorganize society around them. In Becker's view, the rich are not merely chasing utopia, but prioritizing their vision of the future over the very real concerns of people in the present. Impeding environmental research, for instance, makes sense if you believe that human life will continue to exist in an extraterrestrial elsewhere. More Everything Forever asks us to take these ideas seriously, not necessarily because they are credible predictions, but because some people in power believe they are.
Becker, in prose that is snappy if at times predictable, highlights the quasi-spiritual nature of Silicon Valley's utopianism, which is based on two very basic beliefs. First, that death is scary and unpleasant. And second, that thanks to science and technology, the humans of the future will never have to be scared or do anything unpleasant. 'The dream is always the same: go to space and live forever,' Becker writes. (One reason for the interest in space is that longevity drugs, according to the tech researcher Benjamin Reinhardt, can be synthesized only 'in a pristine zero-g environment.') This future will overcome not just human biology but a fundamental rift between science and faith. Becker quotes the writer Meghan O'Gieblyn, who observes in her book God, Human, Animal, Machine that 'what makes transhumanism so compelling is that it promises to restore through science the transcendent—and essentially religious—hopes that science itself obliterated.'
Becker demonstrates how certain contemporary technologists flirt with explicitly religious trappings. Anthony Levandowski, the former head of Google's self-driving-car division, for instance, founded an organization to worship artificial intelligence as a godhead. But Becker also reveals the largely forgotten precedents for this worldview, sketching a lineage of thought that connects today's Silicon Valley seers to earlier futurist prophets. In the late 19th century, the Russian philosopher Nikolai Fedorov preached that humanity's divine mission was to physically resurrect every person who had ever lived and settle them throughout the cosmos, achieving eternal life via what Fedorov called 'the regulation of nature by human reason and will.'
The rapture once preached in churches has been repackaged for secular times: In place of souls ascending to heaven, there are minds preserved digitally—or even bodies kept alive—for eternity. Silicon Valley's visionaries are, in this view, not all cold rationalists; many of them are dreamers and believers whose fixations constitute a spiritual narrative as much as a scientific one—a new theology of technology.
Let's slow down: Why exactly is this a bad idea? Who wouldn't want 'perfect health, immortality, yada yada yada,' as the AI researcher Eliezer Yudkowsky breezily summarizes the goal to Becker? The trouble, Becker shows, is that many of these dreams of personal transcendence disregard the potential human cost of working toward them. For the tech elite, these are visions of escape. But, Becker pointedly writes, 'they hold no promise of escape for the rest of us, only nightmares closing in.'
Perhaps the most extreme version of this nightmare is the specter of an artificial superintelligence—an AI that has grown far beyond human capability. Yudkowsky predicts to Becker that a sufficiently advanced AI, if misaligned with human values, would 'kill us all.' Forecasts for this type of technology, once fringe, have gained remarkable traction among tech leaders, and almost always trend toward the stunningly optimistic. Sam Altman himself is concerned about the prospect of rogue AI—he famously admitted to having stockpiled 'guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to'—but these worries don't stop him from actively planning for a world reshaped by AI's exponential growth. In Altman's words, we live on the brink of a moment in which machines will do 'almost everything' and trigger societal changes so rapid that 'the future can be almost unimaginably great.' Becker is less sanguine, writing that 'we just don't know what it will take to build a machine to do all the things a human can do.' And from his point of view, it's best that things remain that way.
Becker is at his rhetorically sharpest when he examines the philosophy of 'longtermism' that underlies much of this AI-centric and space-traveling fervor. Longtermism, championed by some Silicon Valley–adjacent philosophers and the effective-altruism movement, argues that the weight of the future—the potentially enormous number of human (or post-human) lives to come—overshadows the concerns of the present. If preventing human extinction is the ultimate good, virtually any present sacrifice can and should be rationalized. Becker shows how today's tech elites use such reasoning to support their own dominance in the short term, and how rhetoric about future generations tends to mask injustices and inequalities in the present. When billionaires claim that their space colonies or AI schemes might save humanity, they are also asserting that only they should shape humanity's course. Becker observes that this philosophy is 'made by carpenters, insisting the entire world is a nail that will yield to their ministrations.'
Becker's perspective is largely that of a sober realist doing his darnedest to cut through delusion, yet one might ask whether his argument occasionally goes too far. Silicon Valley's techno-utopian culture may be misguided in its optimism, but is it only that? A gentle counterpoint: The human yearning for transcendence stems from a dissatisfaction with the present and a creative impulse, both of which have driven genuine progress. Ambitious dreams—even seemingly outlandish ones—have historically spurred political and cultural transformation. Faith, too, has helped people face the future with optimism. It should also be acknowledged that many of the tech elite Becker critiques do show some awareness of ethical pitfalls. Not all (or even most) technologists are as blithe or blinkered as Becker sometimes seems to suggest.
In the end, this is not a book that revels in pessimism or cynicism; rather, it serves as a call to clear-eyed humanism. In Becker's telling, tech leaders err not in dreaming big, but in refusing to reckon with the costs and responsibilities that come with their dreams. They preach a future in which suffering, scarcity, and even death can be engineered away, yet they discount the very real suffering here and now that demands our immediate attention and compassion. In an era when billionaire space races and AI hype dominate headlines, More Everything Forever arrives as a much-needed reality check. At times, the book is something more than that: a valuable meditation on the questionable stories we tell about progress, salvation, and ourselves.


Related Articles

Blue Origin launch: How to watch Jeff Bezos' company's next human spaceflight from Texas

USA Today · 37 minutes ago

As six more people prepare to ride to the edge of space, the rest of us will have to content ourselves with watching the mission from solid ground. Fortunately, Blue Origin will provide a livestream.

Blue Origin's next crewed launch, known as NS-32, could get off the ground as early as 9:30 a.m. ET Saturday, May 31. New Shepard rocket launches take place from the company's private ranch facility, known as Launch Site One, located 30 miles north of the town of Van Horn, Texas. Blue Origin will provide a webcast of the launch beginning about 30 minutes ahead of the scheduled liftoff.

Six more space tourists are preparing to join the growing ranks of civilians who have flown on a Blue Origin rocket to the edge of space. The vehicle they'll ride, Blue Origin's New Shepard, is the same one that 58 others have ridden to space on 11 previous human spaceflights from the company's west Texas facility. New Shepard's maiden crewed voyage, in July 2021, even included billionaire Amazon founder Jeff Bezos, who also owns the space technology company.

New Shepard is composed of both a rocket and a crew capsule, which is outfitted with large windows for its passengers to glimpse spectacular views that most of us will never see. Here's everything to know about Blue Origin's next human spaceflight and how to watch a livestream.

What is Blue Origin?
Billionaire Jeff Bezos, best known for founding Amazon, is the founder of the private space technology company Blue Origin. Bezos himself even boarded Blue Origin's New Shepard for its maiden crewed voyage in July 2021, which came after the spacecraft flew 15 flight tests beginning in 2012. In the nearly four years since that first crewed mission, the New Shepard spacecraft has served as a powerful symbol of Blue Origin's commercial spaceflight ambitions amid a growing space tourism industry. In addition to sending space tourists on brief joy rides to the edge of space, Blue Origin has increasingly sought to compete with Elon Musk's SpaceX. The company's massive New Glenn rocket, which flew its inaugural flight test in January, is also being developed for future spaceflights. At 320 feet tall, the rocket rivals SpaceX's 400-foot Starship in size.

When is the next Blue Origin launch?

Blue Origin's next crewed launch, known as NS-32, could get off the ground as early as 9:30 a.m. ET (8:30 a.m. CT) Saturday, May 31, the company announced Tuesday, May 27.

Where are Blue Origin rocket launches?

Blue Origin New Shepard rocket launches take place from the company's private ranch facility, known as Launch Site One. The facility is located 30 miles north of the town of Van Horn in Culberson County, more than 140 miles east of El Paso near the U.S.-Mexico border.

How to watch the Blue Origin New Shepard take off

Blue Origin will provide a webcast of the launch on its website beginning about 30 minutes ahead of the scheduled liftoff.

Who is going on the New Shepard? What to know about NS-32

Here's a look at the passengers on the next Blue Origin spaceflight:

  • Aymette (Amy) Medina Jorge, a STEM teacher at Odyssey Academy in Galveston, Texas, who has led more than 60 zero-gravity space experiments.
  • Gretchen Green, a radiologist specializing in women's imaging with more than 20 years of clinical experience.
  • Jaime Alemán, a Panamanian attorney, businessman and former ambassador to the United States.
  • Jesse Williams, a Canadian entrepreneur and adventurer who serves as CEO of Car History Group, which he founded in 2012.
  • Mark Rocket, an entrepreneur from Christchurch, New Zealand, who is CEO of Kea Aerospace, which develops solar-powered UAVs for aerial imaging and monitoring, and president of Aerospace New Zealand.
  • Paul Jeris, a real estate developer and entrepreneur.

What happens during a New Shepard rocket launch?

Each spaceflight on a New Shepard vehicle lasts about 11 minutes from liftoff to capsule touchdown. Named after astronaut Alan Shepard, the first American in space, the 60-foot-tall New Shepard rocket is topped with the gumdrop-shaped crew capsule. During its ascent, the spacecraft reaches supersonic speeds surpassing 2,000 mph before the rocket booster separates from the crew capsule. At that point, those aboard the capsule become weightless as the spacecraft continues toward the highest point of its brief voyage above the Kármán Line, the 62-mile-high internationally recognized boundary of space. While experiencing a few minutes of microgravity, passengers have the opportunity to unstrap themselves from their seats to gaze out the capsule's large windows and take in a stunning view of Earth.
Meanwhile, the rocket booster heads back to the ground, firing its engines and using its fins to slow and control its descent before landing vertically about two miles from the launchpad. The capsule itself eventually begins what Blue Origin refers to as a "stable freefall," plummeting back to Earth as three massive parachutes deploy before the capsule makes a soft landing in the desert, sending up plumes of dust.

Eric Lagatta is the Space Connect reporter for the USA TODAY Network. Reach him at elagatta@

OpenAI Can Stop Pretending

Atlantic · 40 minutes ago

OpenAI is a strange company for strange times. Valued at $300 billion—roughly the same as seven Fords or one and a half PepsiCos—the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO, Sam Altman, beholden to its nonprofit status. When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is 'safe' and 'benefits all of humanity.' There wasn't supposed to be any pressure—or desire, really—to make money. Later, in 2019, OpenAI created a for-profit subsidiary to better attract investors—the types of people who might otherwise turn to the less scrupulous corporations that dot Silicon Valley. But even then, that part of the organization was under the nonprofit side's control. At the time, it had released no consumer products and capped how much money its investors could make. Then came ChatGPT. OpenAI's leadership had intended for the bot to provide insight into how people would use AI without any particular hope for widespread adoption. But ChatGPT became a hit, kicking off 'a growth curve like nothing we have ever seen,' as Altman wrote in an essay this past January. The product was so alluring that the entire tech industry seemed to pivot overnight into an AI arms race. Now, two and a half years since the chatbot's release, Altman says some half a billion people use the program each week, and he is chasing that success with new features and products—for shopping, coding, health care, finance, and seemingly any other industry imaginable. OpenAI is behaving like a typical business, because its rivals are typical businesses, and massive ones at that: Google and Meta, among others. Today, 2015 feels like a very long time ago, and the charitable origins have turned into a ball and chain for OpenAI.
Last December, after facing concerns from potential investors that pouring money into the company wouldn't pay off because of the nonprofit mission and complicated governance structure, the organization announced plans to change that: OpenAI was seeking to transition to a for-profit. The company argued that this was necessary to meet the tremendous costs of building advanced AI models. A nonprofit arm would still exist, though it would separately pursue 'charitable initiatives'—and it would not have any say over the actions of the for-profit, which would convert into a public-benefit corporation, or PBC. Corporate backers appeared satisfied: In March, the Japanese firm SoftBank conditioned billions of dollars in investments on OpenAI changing its structure. Resistance came as swiftly as the new funding. Elon Musk—a co-founder of OpenAI who has since created his own rival firm, xAI, and seems to take every opportunity to undermine Altman—wrote on X that OpenAI 'was funded as an open source, nonprofit, but has become a closed source, profit-maximizer.' He had already sued the company for abandoning its founding mission in favor of financial gain, and claimed that the December proposal was further proof. Many unlikely allies emerged soon after. Attorneys general in multiple states, nonprofit groups, former OpenAI employees, outside AI experts, economists, lawyers, and three Nobel laureates all have raised concerns about the pivot, even petitioning to submit briefs to Musk's lawsuit. OpenAI backtracked, announcing a new plan earlier this month that would have the nonprofit remain in charge. Steve Sharpe, a spokesperson for OpenAI, told me over email that the new proposed structure 'puts us on the best path to' build a technology 'that could become one of the most powerful and beneficial tools in human history.' (The Atlantic entered into a corporate partnership with OpenAI in 2024.)
Yet OpenAI's pursuit of industry-wide dominance shows no real signs of having hit a roadblock. The company has a close relationship with the Trump administration and is leading perhaps the biggest AI infrastructure buildout in history. Just this month, OpenAI announced a partnership with the United Arab Emirates and an expansion into personal gadgets—a forthcoming 'family of devices' developed with Jony Ive, former chief design officer at Apple. For-profit or not, the future of AI still appears to be very much in Altman's hands. Why all the worry about corporate structure anyway? Governance, boardroom processes, legal arcana—these things are not what sci-fi dreams are made of. Yet those concerned with the societal dangers that generative AI, and thus OpenAI, pose feel these matters are of profound importance. The still more powerful artificial 'general' intelligence, or AGI, that OpenAI and its competitors are chasing could theoretically cause mass unemployment, worsen the spread of misinformation, and violate all sorts of privacy laws. In the highest-flung doomsday scenarios, the technology brings about civilizational collapse. Altman has expressed these concerns himself—and so OpenAI's 2019 structure, which gave the nonprofit final say over the for-profit's actions, was meant to guide the company toward building the technology responsibly instead of rushing to release new AI products, sell subscriptions, and stay ahead of competitors. 'OpenAI's nonprofit mission, together with the legal structures committing it to that mission, were a big part of my decision to join and remain at the company,' Jacob Hilton, a former OpenAI employee who contributed to ChatGPT, among other projects, told me.
In April, Hilton and a number of his former colleagues, represented by the Harvard law professor Lawrence Lessig, wrote a letter to the court hearing Musk's lawsuit, arguing that a large part of OpenAI's success depended on its commitment to safety and the benefit of humanity. To renege on, or at least minimize, that mission was a betrayal. The concerns extend well beyond former employees. Geoffrey Hinton, a computer scientist at the University of Toronto who last year received a Nobel Prize for his AI research, told me that OpenAI's original structure would better help 'prevent a super intelligent AI from ever wanting to take over.' Hinton is one of the Nobel laureates who has publicly opposed the tech company's for-profit shift, alongside the economists Joseph Stiglitz and Oliver Hart. The three academics, joining a number of influential lawyers, economists, and AI experts, in addition to several former OpenAI employees, including Hilton, signed an open letter in April urging the attorneys general in Delaware and California—where the company's nonprofit was incorporated and where the company is headquartered, respectively—to closely investigate the December proposal. According to its most recent tax filing, OpenAI is intended to build AGI 'that safely benefits humanity, unconstrained by a need to generate financial return,' so disempowering the nonprofit seemed, to the signatories, self-evidently contradictory. In its initial proposal to transition to a for-profit, OpenAI still would have had some accountability as a public-benefit corporation: A PBC legally has to try to make profits for shareholders alongside pursuing a designated 'public benefit' (in this case, building 'safe' and 'beneficial' AI as outlined in OpenAI's founding mission). In its December announcement, OpenAI described the restructure as 'the next step in our mission.' 
But Michael Dorff, another signatory to the open letter and a law professor at UCLA who studies public-benefit corporations, explained to me that PBCs aren't necessarily an effective way to bring about public good. 'They are not great enforcement tools,' he said—they can 'nudge' a company toward a given cause but do not give regulators much authority over that commitment. (Anthropic and xAI, two of OpenAI's main competitors, are also public-benefit corporations.) OpenAI's proposed conversion also raised a whole other issue—a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits. And so yet another coalition, composed of nonprofits and advocacy groups, wrote its own petition for OpenAI's plans to be investigated, with the aim of preventing charitable organizations from being leveraged for financial gain in the future. Regulators, it turned out, were already watching. Three days after OpenAI's December announcement of the plans to revoke nonprofit oversight, Kathy Jennings, the attorney general of Delaware, notified the court presiding over Musk's lawsuit that her office was reviewing the proposed restructure to ensure that the corporation was fulfilling its charitable interest to build AI that benefits all of humanity. California's attorney general, Rob Bonta, was reviewing the restructure, as well. This ultimately led OpenAI to change plans. 'We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware,' Altman wrote in a letter to OpenAI employees earlier this month. The for-profit, meanwhile, will still transition to a PBC. The new plan is not yet a done deal: The offices of the attorneys general told me that they are reviewing the new proposal. Microsoft, OpenAI's closest corporate partner, has not yet agreed to the new structure. 
One could be forgiven for wondering what all the drama is for. Amid tension over OpenAI's corporate structure, the organization's corporate development hasn't so much as flinched. In just the past few weeks, the company has announced a new CEO of applications, someone to directly oversee and expand business operations; OpenAI for Countries, an initiative focused on building AI infrastructure around the world; and Codex, a powerful AI 'agent' that does coding tasks. To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers. No matter OpenAI's ultimate aims, in a race against Google and Meta, some commercial moves are necessary to stay ahead. And enriching OpenAI's investors and improving people's lives are not necessarily mutually exclusive. The greater issue is this: There is no universal definition for 'safe' or 'beneficial' AI. A chatbot might help doctors process paperwork faster and help a student float through high school without learning a thing; an AI research assistant could help climate scientists arrive at novel insights while also consuming huge amounts of water and fossil fuels. Whatever definition OpenAI applies will be largely determined by its board. Altman, in his May letter to employees, contended that OpenAI is on the best path 'to continue to make rapid, safe progress and to put great AI in the hands of everyone.' But everyone, in this case, has to trust OpenAI's definition of safe progress. The nonprofit has not always been the most effective check on the company. In 2023, the nonprofit board—which then and now had 'control' over the for-profit subsidiary—removed Altman from his position as CEO. But the company's employees revolted, and he was reinstated shortly thereafter with the support of Microsoft.
In other words, 'control' on paper does not always amount to much in reality. Sharpe, the OpenAI spokesperson, said the nonprofit will be able to appoint and remove directors to OpenAI's separate for-profit board, but declined to clarify whether its board will be able to remove executives (such as the CEO). The company is 'continuing to work through the specific governance mandate in consultation with relevant stakeholders,' he said. Sharpe also told me that OpenAI will remove the cap on shareholder returns, which he said will satisfy the conditions for SoftBank's billions of dollars in investment. A top SoftBank executive has said 'nothing has really changed' with OpenAI's restructure, despite the nonprofit retaining control. If investors are now satisfied, the underlying legal structure is irrelevant. Marc Toberoff, a lawyer representing Musk in his lawsuit against OpenAI, wrote in a statement that 'SoftBank pulled back the curtain on OpenAI's corporate theater and said the quiet part out loud. OpenAI's recent 'restructuring' proposal is nothing but window dressing.' Lessig, the lawyer who represented the former OpenAI employees, told me that 'it's outrageous that we are allowing the development of this potentially catastrophic technology with nobody at any level doing any effective oversight of it.' Two years ago, Altman, in Senate testimony, seemed to agree with that notion: He told lawmakers that 'regulatory intervention by governments will be critical to mitigate the risks' of powerful AI. But earlier this month, only a few days after writing to his employees and investors that 'as AI accelerates, our commitment to safety grows stronger,' he told the Senate something else: Too much regulation would be 'disastrous' for America's AI industry. Perhaps—but it might also be in the best interests of humanity.

Tesla faces collapsing sales in Canada's Québec province, with new registrations tumbling 85%

Business Insider · 42 minutes ago

Tesla's sales woes have reached Canada. Data from the vehicle registration authority in the province of Québec shows a dramatic decline in Tesla registrations in the first quarter of 2025. Only 524 new Tesla vehicles were registered in Québec between January and March 2025, down over 85% from the 5,097 units logged in the final quarter of 2024. The company's top-selling Model Y saw the steepest drop in terms of pure numbers, falling from 3,274 units in the final quarter of 2024 to 360 in the first quarter of 2025. The Model 3, Tesla's cheapest car, plunged from 1,786 to just 96 units over the same period, a fall of 94%. While the drop is precipitous, it should be noted that auto sales are generally lower in the first quarter of the year than later in the year. Though confined to one region of Canada, the collapse mirrors similar issues in Europe, where Tesla sales fell by nearly 50% in April despite overall EV demand continuing to grow. In Québec, as in Europe, demand for electric vehicles remains strong, suggesting that Tesla's slump is less about market conditions and more about the brand itself.

Rebate freeze and trade tensions

Several factors appear to be converging. Tesla has been excluded from Canada's federal EV rebate program, with $43 million in rebates frozen and each individual claim now under review. Transport Minister Chrystia Freeland ordered the freeze in March following a last-minute surge in Tesla rebate applications — from 300 a day to nearly 5,800 — which triggered a probe into possible abuse. Freeland also said that Tesla would remain ineligible for future incentives as long as President Donald Trump's 25% tariffs on Canadian goods are in place. In parallel, provinces including British Columbia, Prince Edward Island, and Manitoba have removed Tesla from their rebate programs.
Political backlash and brand damage

Tesla's registration drop in Québec also comes amid a broader global backlash, especially in Europe, against CEO Elon Musk, who has endorsed a number of right-wing European political parties, including Germany's far-right AfD party and Britain's populist Reform UK party. In North America, Musk's role leading the Department of Government Efficiency has led to protests, boycotts, and vandalism of Tesla dealerships across at least a dozen states. Musk said this week he was stepping away from DOGE after months of involvement as a "special government employee." Federal law stipulates that those with this title cannot serve for more than 130 days in a 365-day period. Tesla's shares, which had come under pressure during Musk's DOGE stint, began rebounding in April after he announced he would step back from government work and "spend 24/7 at work" on his companies. In a Q&A published by Ars Technica on Tuesday, he said he'd been too involved in politics since wading into the 2024 presidential race last year — a campaign he heavily financed to the tune of nearly $300 million. In a sit-down with Bloomberg at the Qatar Economic Forum last week, he said he's no longer going to be spending big on politics, like he did in the 2024 election.
