

The Biggest Existential Threat Calls For Philosophers, Not AI Experts

Forbes

7 hours ago



Geoffrey Hinton, Nobel laureate and former AI lead at Google, recently distinguished between two ways in which AI poses an existential threat to humanity: through people misusing it, and through AI itself turning against us. Hinton cites cyberattacks, the creation of viruses, the corruption of elections, and the creation of echo chambers as examples of the first; he cites deadly autonomous weapons, and a superintelligent AI that realizes it doesn't need us and therefore decides to kill us, as examples of the second.

But there is a third existential threat that neither Hinton nor his AI peers seem to worry about. And contrary to their warnings, this third threat is eroding human existence without reaching any of the media headlines. The third way AI poses an existential threat to humanity unfolds quietly, as we lose touch with what it means to be human.

The simplest definition of an existential threat is 'a threat to something's very existence'. But to know whether humanity's existence is threatened, we must know what it means to exist as a human. And the AI experts don't. Ever since Alan Turing refused to consider the question 'Can machines think?', AI experts have consistently failed to define basic human traits such as thinking, consciousness, and creativity. No one knows how to define these things, they say. And they are right. But they are wrong to use their lack of definitions as an excuse for not taking the question of what it means to be human seriously. And they add to the existential threat to humanity by using terms like human-level intelligence when talking about AI.

What Existential Threat Really Means

Talking about when and how AI will reach human-level intelligence, or outsmart us altogether, without having any idea how to understand human thinking, consciousness, and creativity is not only optimistic. It also erodes our shared understanding of ourselves and our surroundings. And this may very well turn out to be the biggest existential threat of all: that we lose touch with our humanity.

In his 1954 lecture 'The Question Concerning Technology', German philosopher Martin Heidegger said that our relationship with technology puts us in constant danger of losing touch with technology, reality, and ourselves. Unless we get a better grip on what he called the essence of technology, he said, we are bound to lose that touch.

When I interviewed Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, for 'An AI Professor's Guide To Saving Humanity From Big Tech' last year, he agreed that Heidegger's prediction has proven frighteningly accurate. But instead of pointing to the essence of technology, he said that 'the people who are in control of the deployment of [technology] are perhaps the least socially intelligent people we have on the planet.' Whether that is why AI experts conveniently avoid talking about the third existential threat is not for me to say. But as long as we focus on them and their speculations about what it takes for machines to reach human-level intelligence, we are not focusing on ourselves and what it takes for us to exist and evolve as humans.

Existential Philosophers On Existential Threats

Unlike AI experts, founders, and developers, the existential philosophy that Heidegger helped pioneer has not received billions of dollars in annual investment since the 1950s. Quite the contrary: while the AI industry has exploded, interest and investment in the humanities have declined worldwide. In other words, humanity has for decades invested heavily in understanding and developing artificial intelligence, while neglecting to understand and develop ourselves as humans. But although existential philosophers like Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty have not received grants as large as those of their colleagues in computer science departments, they have contributed insights that are more helpful when it comes to understanding and dealing with the existential threats posed by AI.

In Being and Nothingness, French philosopher Jean-Paul Sartre places human consciousness, or no-thingness (néant), in opposition to being, or thingness (être).

Just as different AI experts believe in different paths to human-level intelligence, different existential philosophers describe human existence in different ways. But unlike AI experts, they don't consider the lack of definitions a problem. On the contrary, they consider the lack of definitions, theories, and technical solutions an important piece in the puzzle of understanding what it means to be human. Existential philosophers have realized that consciousness, creativity, and the other human qualities we struggle to define are not an expression of 'something', that is, a core, function, or feature that distinguishes us from animals and machines. Rather, they are an expression of 'nothing'. Unlike other creatures, we humans not only exist; we also question our existence. We ask why and for how long we will be here. We exist knowing that at some point we will cease to exist, that we are limited in time and space, and that we therefore have to ask why, how, and with whom we live our lives.

For existential philosophers, AI does not pose an existential threat to humanity because it might exterminate all humans. It poses an existential threat because it offers answers faster than humans can ask the questions that help them contemplate their existence. And when humans stop asking existential questions, they stop being human.

AI Experts Agree: Existential Threats Call For Philosophy

While existential philosophers insist on understanding the existential part of existential threats, AI experts skip the existential questions and go straight to the technical and political answers to how the threats can be contained. That's why we keep hearing about responsible AI and regulation: because that's the part that calls for technical expertise. That's the part where the AI experts are still needed.

AI experts know how to design and develop 'something', but they have no idea how to deal with 'nothing'. That's probably what Hinton realized when he retired to spend more time on what he described as 'more philosophical work'. That also seems to be what Demis Hassabis, CEO of Google DeepMind, suggests when he says that 'we need new great philosophers to come about to understand the implications of this.' And it's certainly what Nick Bostrom hinted at in my interview with him about his latest book, Deep Utopia, when he declared that some questions are 'beyond his pay grade'.

What 20th-century existential philosophy teaches us is that we don't have to wait for the AI experts to retire or for new great philosophers to emerge to deal with the existential threats posed by AI. All we have to do is remind ourselves and each other to ask how we want, and don't want, to live our lives before we trust AI to know the answer.
