Human-level AI is not inevitable. We have the power to change course

The Guardian · 4 days ago
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility, a view shared by hundreds of leading AI researchers and other prominent figures.
Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't build it, someone else will – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before.
No technology is inevitable, not even something as tempting as AGI.
Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts.
Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.
And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'
It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' missile defense program dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements.
These choices weren't made in a vacuum. Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'
There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts.
In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.
The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to this shift away from coal, US per capita carbon emissions are now lower than they were in 1913.
In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round.
Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.
It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet).
But governments are not profit maximizers. Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy.
It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.
Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want.
At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.
Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile.
But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?
We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing.
But right now, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.) There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources.
Governments, on the other hand, aren't subject to the same financial pressures.
An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next-generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to affect only players who can afford to spend more than $100m on a single training run.
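To make the threshold idea concrete, here is a minimal sketch of what such a check could look like. Everything in it is an assumption for illustration: the 6 × parameters × tokens formula is a widely used rough estimate of dense-model training compute, and the 1e26 FLOP threshold is a hypothetical stand-in for whatever regulators might actually set (roughly the scale of a $100m-plus training run, not a figure from any real rule).

```python
# Illustrative sketch only. The 6 * params * tokens rule of thumb is a
# standard rough estimate of dense-transformer training compute; the
# threshold value is hypothetical, not drawn from any actual regulation.

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def run_needs_review(n_params: float, n_tokens: float,
                     threshold_flops: float = 1e26) -> bool:
    """True if a planned run crosses the (hypothetical) reporting threshold.

    A cloud provider could apply a check like this before scheduling a
    large job; anything below the line proceeds untouched, so smaller
    labs and academic users are never affected.
    """
    return estimated_training_flops(n_params, n_tokens) >= threshold_flops


if __name__ == "__main__":
    # A frontier-scale run: 2 trillion parameters on 15 trillion tokens.
    print(run_needs_review(2e12, 15e12))  # True  (~1.8e26 FLOPs)
    # A mid-sized open-weights run: 7 billion parameters on 2 trillion tokens.
    print(run_needs_review(7e9, 2e12))    # False (~8.4e22 FLOPs)
```

The point of the sketch is simply that the check itself is cheap and narrowly targeted; the hard part of compute governance is institutional, not computational.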
Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand.
And while the world may feel fractious, rival nations have cooperated to surprising degrees.
The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.
In the 1960s and 70s, many analysts feared that every country that could build nukes would. But most of the world's roughly three dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.
The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it.
Technology happens because people make it happen. We can choose otherwise.
Garrison Lovely is a freelance journalist