Human-level AI is not inevitable. We have the power to change course


The Guardian · 2 days ago
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).
Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't, someone else will do it – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before.
No technology is inevitable, not even something as tempting as AGI.
Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts.
Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.
And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'
It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' satellite defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements.
These choices weren't made in a vacuum. Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'
There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts.
In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.
The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to its move from coal, US per capita carbon emissions are now lower than they were in 1913.
In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round.
Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.
It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet).
But governments are not profit maximizers. Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy.
It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.
Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want.
At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.
Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile.
But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?
We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing.
But right now, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.) There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources.
Governments, on the other hand, aren't subject to the same financial pressures.
An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to only affect players who can afford to spend more than $100m on a single training run.
Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand.
And while the world may feel fractious, rival nations have cooperated to surprising degrees.
The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.
In the 1960s and 70s, many analysts feared that every country that could build nukes, would. But most of the world's roughly three-dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.
The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it.
Technology happens because people make it happen. We can choose otherwise.
Garrison Lovely is a freelance journalist

