
Doge wants to replace our institutions with a tech utopia. It won't work
Doge's website dubiously claims $190bn in savings. The receipts show that its cuts are less about efficiency than about effective dissolution, a fate already met by USAID, the federal agency responsible for distributing foreign assistance.
Don't be fooled. These brash new reductions are not just your garden-variety small-government crusades or culture-war skirmishes. This administration's war on institutions derives from the newfound power of Silicon Valley ideology – a techno-determinism that views each institution's function as potential raw material for capture by private digital platforms.
All the while, Elon Musk sold the White House on an 'AI-first strategy' for the US government. The recent executive order, Removing Barriers to American Leadership in Artificial Intelligence, mandates that barely tested Silicon Valley AI be jammed into the government's work. It directs agencies to use AI to 'lessen the burden of bureaucratic restrictions'. This is not just a thinly veiled attempt to reduce institutional activity; it is also a degradation play.
Doge makes plain an often misunderstood tension: Silicon Valley's final dream is a world without institutions. Since the rise of the internet, startups have encouraged, and profited from, institutional decline. This anti-institutionalism goes back to the roots of computing. Charles Babbage's difference engine, central to modern computing, was built on technologies meant to control labor. It reflected Babbage's belief that the factory manager's highest aim was to reduce the skill and cognitive complexity of laborers' tasks. If the machine could manage production, humans – now smoothed-out automatons – would hardly need accompanying social protections, or even any governance at all.
In 1948, Norbert Wiener founded the discipline of cybernetics, 'the science of control and communications in the animal and machine'. This automated governance was eventually brought into direct competition with public institutions. The revolt against the state took many forms in the history of computing thereafter, from the libertarian 'Californian ideology' ('information wants to be free') to the very idea that a new 'cyberspace' would be liberated from governments. Here the individual is an entrepreneur of the mind, able to instantly improve their lot without the mediating hand of the institutional form.
To get to the real heart of Doge's ideology, read The Cathedral and the Bazaar, Eric Raymond's manifesto on building open-source software. For Raymond, cathedrals are 'carefully crafted by individual wizards or small bands of mages working in splendid isolation'. This slow, deliberate work is no match for the networked and digitally enabled bazaar, where many software developers move fast by releasing early and often, delegate everything they can, and are open to the point of promiscuity. Something like scripture for computer engineers, Raymond's ideas soon jumped out of the network and into governance of the physical world, where all human organizations were scrutinized as the maligned 'cathedral'.
Entrepreneurs loved this idea, too. The management method known as the 'lean startup' is a lightweight program of data-driven optimizations designed to quickly scale businesses. Instead of human labor and judgment, lean startups use data and algorithms to experiment their way toward governance.
But there's a catch: a public institution is not supposed to be run like a digital startup. Silicon Valley may have carved out a niche in which its organizational philosophies mastered food delivery apps, AI girlfriends and money-laundering shitcoins, but the moment they take these methods to institutions entrusted with public welfare, they've lost the plot. Governments don't have customers – they care for citizens. If classical liberalism had the state and its many sovereign institutions, and neoliberalism had the divine hand of the free market, today's platform class elevates computation as the ultimate arbiter of truth. When presented with an institutional force, the platform class first asks: how could this be delivered by way of a digital platform?
Digital technology doesn't have to be this way. Good software can augment institutions, not serve as the rationale for their deletion. Building this future requires undoing Silicon Valley's pernicious opposition to the institutional form. By giving in to the digital utopians' anti-institutionalism, we have allowed them to reshape government according to their growth-at-all-costs logic.
If the newly empowered digital utopianism goes unchecked, we face a platform-archy where black-box AI makes decisions once adjudicated through democratic institutions. This isn't just a Silicon Valley efficiency fantasy; it's on the roadmap of every authoritarian who ever sniffed power.
Thankfully, the anti-Doge backlash was swift. The abrupt layoffs backfired, leading many Americans to fully understand just how much research and resources for advancing science, medicine and culture are tied to federal support.
In the private sector, capital is no longer free since the Federal Reserve raised interest rates in 2022, and the growth of the big Silicon Valley platforms has almost completely stalled. In search of an answer, Silicon Valley is making a big bet on AI, overwhelming users with automated answers that hallucinate and mislead at every turn. It's becoming harder and harder for the average person to buy what the digital utopians are selling.
The response to this assault on our institutions might be a kind of Digital New Deal – a public plan for institutions in the AI era. This 21st-century economics must go well beyond solving for mass unemployment. Reconstructing the institutional foundations of public goods such as journalism, libraries and higher education requires more than just restoring the public funds stripped by Doge. It will require forceful assertions about their regulatory value in the face of a fully automated slop state. Governments come and go, but free and open institutions are critical to the functioning of democracy. If we misrecognize digital platforms as public institutions, we will not easily reverse Doge's damage.
Mike Pepi is a technologist and author who has written widely about the intersection of culture and the internet. His book, Against Platforms: Surviving Digital Utopia, was published by Melville House in 2025.