
Latest news with #existentialrisk

‘Self-termination is most likely': the history and future of societal collapse

The Guardian

02-08-2025



'We can't put a date on Doomsday, but by looking at the 5,000 years of [civilisation], we can understand the trajectories we face today – and self-termination is most likely,' says Dr Luke Kemp at the Centre for the Study of Existential Risk at the University of Cambridge. 'I'm pessimistic about the future,' he says. 'But I'm optimistic about people.' Kemp's new book covers the rise and collapse of more than 400 societies over 5,000 years and took seven years to write.

The lessons he has drawn are often striking: people are fundamentally egalitarian but are led to collapses by enriched, status-obsessed elites, while past collapses often improved the lives of ordinary citizens. Today's global civilisation, however, is deeply interconnected and unequal and could lead to the worst societal collapse yet, he says. The threat is from leaders who are 'walking versions of the dark triad' – narcissism, psychopathy and Machiavellianism – in a world menaced by the climate crisis, nuclear weapons, artificial intelligence and killer robots.

The work is scholarly, but the straight-talking Australian can also be direct, such as when setting out how a global collapse could be avoided. 'Don't be a dick' is one of the solutions proposed, along with a move towards genuinely democratic societies and an end to inequality.

His first step was to ditch the word civilisation, a term he argues is really propaganda by rulers. 'When you look at the near east, China, Mesoamerica or the Andes, where the first kingdoms and empires arose, you don't see civilised conduct, you see war, patriarchy and human sacrifice,' he says. This was a form of evolutionary backsliding from the egalitarian and mobile hunter-gatherer societies which shared tools and culture widely and survived for hundreds of thousands of years. 'Instead, we started to resemble the hierarchies of chimpanzees and the harems of gorillas.'
Instead, Kemp uses the term Goliaths to describe kingdoms and empires, meaning a society built on domination, such as the Roman empire: state over citizen, rich over poor, master over slave and men over women. He says that, like the biblical warrior slain by David's slingshot, Goliaths began in the bronze age, were steeped in violence and often surprisingly fragile.

Goliath states do not simply emerge as dominant cliques that loot surplus food and resources, he argues, but need three specific types of 'Goliath fuel'. The first is a particular type of surplus food: grain. That can be 'seen, stolen and stored', Kemp says, unlike perishable foods. In Cahokia, for example, a society in North America that peaked around the 11th century, the advent of maize and bean farming led to a society dominated by an elite of priests and human sacrifice, he says.

The second Goliath fuel is weaponry monopolised by one group. Bronze swords and axes were far superior to stone and wooden axes, and the first Goliaths in Mesopotamia followed their development, he says. Kemp calls the final Goliath fuel 'caged land', meaning places where oceans, rivers, deserts and mountains meant people could not simply migrate away from rising tyrants. Early Egyptians, trapped between the Red Sea and the Nile, fell prey to the pharaohs, for example.

'History is best told as a story of organised crime,' Kemp says. 'It is one group creating a monopoly on resources through the use of violence over a certain territory and population.'

All Goliaths, however, contain the seeds of their own demise, he says: 'They are cursed and this is because of inequality.' Inequality does not arise because all people are greedy. They are not, he says. The Khoisan peoples in southern Africa, for example, shared and preserved common lands for thousands of years despite the temptation to grab more. Instead, it is the few people high in the dark triad who fall into races for resources, arms and status, he says.
'Then as elites extract more wealth from the people and the land, they make societies more fragile, leading to infighting, corruption, immiseration of the masses, less healthy people, overexpansion, environmental degradation and poor decision making by a small oligarchy. The hollowed-out shell of a society is eventually cracked asunder by shocks such as disease, war or climate change.'

History shows that increasing wealth inequality consistently precedes collapse, says Kemp, from the Classical Lowland Maya to the Han dynasty in China and the Western Roman empire. He also points out that for the citizens of early rapacious regimes, collapse often improved their lives because they were freed from domination and taxation and returned to farming. 'After the fall of Rome, people actually got taller and healthier,' he says.

Collapses in the past were at a regional level and often beneficial for most people, but collapse today would be global and disastrous for all. 'Today, we don't have regional empires so much as we have one single, interconnected global Goliath. All our societies act within one single global economic system – capitalism,' Kemp says.

He cites three reasons why the collapse of the global Goliath would be far worse than previous events. First is that collapses are accompanied by surges in violence as elites try to reassert their dominance. 'In the past, those battles were waged with swords or muskets. Today we have nuclear weapons,' he says. Second, people in the past were not heavily reliant on empires or states for services and, unlike today, could easily go back to farming or hunting and gathering. 'Today, most of us are specialised, and we're dependent upon global infrastructure. If that falls away, we too will fall,' he says. 'Last but not least is that, unfortunately, all the threats we face today are far worse than in the past,' he says.
Past climatic changes that precipitated collapses, for example, usually involved a temperature change of 1C at a regional level. Today, we face 3C globally. There are also about 10,000 nuclear weapons, technologies such as artificial intelligence and killer robots, and engineered pandemics, all sources of catastrophic global risk.

Kemp says his argument that Goliaths require rulers who are strong in the triad of dark traits is borne out today. 'The three most powerful men in the world are a walking version of the dark triad: Trump is a textbook narcissist, Putin is a cold psychopath, and Xi Jinping came to rule [China] by being a master Machiavellian manipulator.' 'Our corporations and, increasingly, our algorithms, also resemble these kinds of people,' he says. 'They're basically amplifying the worst of us.'

Kemp points to these 'agents of doom' as the source of the current trajectory towards societal collapse. 'These are the large, psychopathic corporations and groups which produce global catastrophic risk,' he says. 'Nuclear weapons, climate change, AI, are only produced by a very small number of secretive, highly wealthy, powerful groups, like the military-industrial complex, big tech and the fossil fuel industry. 'The key thing is this is not about all of humanity creating these threats. It is not about human nature. It is about small groups who bring out the worst in us, competing for profit and power and covering all [the risks] up.'

The global Goliath is the endgame for humanity, Kemp says, like the final moves in a chess match that determine the result. He sees two outcomes: self-destruction or a fundamental transformation of society. He believes the first outcome is the most likely, but says escaping global collapse could be achieved. 'First and foremost, you need to create genuine democratic societies to level all the forms of power that lead to Goliaths,' he says.
That means running societies through citizen assemblies and juries, aided by digital technologies to enable direct democracy at large scales. History shows that more democratic societies tend to be more resilient, he says. 'If you'd had a citizens' jury sitting over the [fossil fuel companies] when they discovered how much damage and death their products would cause, do you think they would have said: 'Yes, go ahead, bury the information and run disinformation campaigns'? Of course not,' Kemp says.

Escaping collapse also requires taxing wealth, he says, otherwise the rich find ways to rig the democratic system. 'I'd cap wealth at $10 million. That's far more than anyone needs. A famous oil tycoon once said money is just a way for the rich to keep score. Why should we allow these people to keep score at the risk of destroying the entire planet?'

If citizens' juries and wealth caps seem wildly optimistic, Kemp says we have long been brainwashed by rulers justifying their dominance, from the self-declared god-pharaohs of Egypt and priests claiming to control the weather to autocrats claiming to defend people from foreign threats and tech titans selling us their techno-utopias. 'It's always been easier to imagine the end of the world than the end of Goliaths. That's because these are stories that have been hammered into us over the space of 5,000 years,' he says.

'Today, people find it easier to imagine that we can build intelligence on silicon than we can do democracy at scale, or that we can escape arms races. It's complete bullshit. Of course we can do democracy at scale. We're a naturally social, altruistic, democratic species and we all have an anti-dominance intuition. This is what we're built for.'

Kemp rejects the suggestion that he is simply presenting a politically leftwing take on history. 'There is nothing inherently left wing about democracy,' he says.
'Nor does the left have a monopoly on fighting corruption, holding power accountable and making sure companies pay for the social and environmental damages they cause. That's just making our economy more honest.'

He also has a message for individuals: 'Collapse isn't just caused by structures, but also people. If you want to save the world then the first step is to stop destroying it. In other words: don't be a dick. Don't work for big tech, arms manufacturers or the fossil fuel industry. Don't accept relationships based on domination and share power whenever you can.'

Despite the possibility of avoiding collapse, Kemp remains pessimistic about our prospects. 'I think it's unlikely,' he says. 'We're dealing with a 5,000-year process that is going to be incredibly difficult to reverse, as we have increasing levels of inequality and of elite capture of our politics. 'But even if you don't have hope, it doesn't really matter. This is about defiance. It's about doing the right thing, fighting for democracy and for people to not be exploited. And even if we fail, at the very least, we didn't contribute to the problem.'

Goliath's Curse by Luke Kemp was published in the UK on 31 July by Viking Penguin.

If AI Doesn't Wipe Us Out It Might Actually Make Us Stronger

Forbes

19-07-2025



AI doomers believe that advanced AI is an existential risk and will seek to kill all humanity. But if we manage to survive, will we be stronger for doing so?

In today's column, I explore the sage advice that what doesn't kill you will supposedly make you stronger. I'm sure you've heard that catchphrase many times. An inquisitive reader asked me whether this same line applies to the worrisome prediction that AI will one day wipe out humanity. In short, if AI isn't successful in doing so, does that suggest that humanity will be stronger accordingly? Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).

Humankind Is On The List

I recently examined the ongoing debate between the AI doomers and the AI accelerationists. For in-depth details on the ins and outs of the two contrasting perspectives, see my elaboration at the link here.

The discourse goes this way. AI doomers are convinced that AI will ultimately be so strong and capable that the AI will decide to get rid of humans. The reasons that AI won't want us are varied, of which perhaps the most compelling is that humanity would be the biggest potential threat to AI. Humans could scheme and possibly find a means of turning off AI or otherwise defeating AI.

The AI accelerationists emphasize that AI is going to be immensely valuable to humankind. They assert that AI will be able to find a cure for cancer, solve world hunger, and be an all-around boost to cope with human exigencies. The faster or sooner that we get to very advanced AI, the happier we will be since solutions to our societal problems will be closer at hand.

A reader has asked me whether the famous line that what doesn't kill you makes you stronger would apply in this circumstance.
If the AI doomer prediction comes to pass, but we manage to avoid getting utterly destroyed, would this imply that humanity will be stronger as a result of that incredible feat of survival? I always appreciate such thoughtful inquiries and figured that I would address the matter so that others can engage in the intriguing puzzle.

Assumption That AI Goes After Us

One quick point is that if AI doesn't try to squish us like a bug, and instead AI is essentially neutral or benevolent as per the AI accelerationist viewpoint, or that we can control AI and it never mounts a realistic threat, the question about becoming stronger seems out of place. Let's then take the resolute position that the element of becoming stronger is going to arise solely when AI overtly seeks to get rid of us.

A smarmy retort might be that we could nonetheless become stronger even if the AI isn't out to destroy us. Yes, I get that, thanks. The argument though is that the revered line consists of what doesn't kill you will make you stronger. I am going to interpret that line to mean that something must first aim to wipe you out. Only then if you survive will you be stronger. The adage can certainly be interpreted in other ways, but I think it is most widely accepted in that frame of reference.

Paths Of Humankind Destruction

Envision that AI makes an all-out attempt to eradicate humankind. This is the ultimate existential risk about AI that everyone keeps bringing up. Some refer to this as 'P(doom)' which means the probability of doom, or that AI zonks us entirely. How would it attain this goal? Lots of possibilities exist. The advanced form of AI, perhaps artificial general intelligence (AGI) or maybe the further progressed artificial super intelligence (ASI), could strike in obvious and non-obvious ways. AGI is AI that is considered on par with human intellect and can seemingly match our intelligence.
ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.

An obvious approach to killing humanity would be to launch nuclear arsenals that might cause a global conflagration. It might also inspire humans to go against other humans. Thus, AI simply triggers the start of something, and humanity ensures that the rest of the path is undertaken. Boom, drop the mic.

This might not be especially advantageous for AI. You see, suppose that AI gets wiped out in the same process. Are we to assume that AI is willing to sacrifice itself in order to do away with humanity? A twist that often is not considered consists of AI presumably wanting to achieve self-survival. If AGI or ASI are so smart that they aim to destroy us and have a presumably viable means to do so, wouldn't it seem that AI also wants to remain intact and survive beyond the demise of humanity? That seems a reasonable assumption.

A non-obvious way of getting rid of us would be to talk us into self-destruction. Think about the current use of generative AI. You carry on discussions with AI. Suppose the AI ganged up and started telling the populace at scale to wipe each other out. Perhaps humanity would be spurred by this kind of messaging. The AI might even provide some tips or hints on how to do so, providing clever means that this would still keep AI intact. On a related tangent, I've been extensively covering the qualms that AI is dispensing mental health guidance on a population level and we don't know what this is going to do in the long term, see the link here.

Verge Of Destruction But We Live Anyway

Assume that humanity miraculously averts the AI assault. How did we manage to do so? It could be that we found ways to control AI and render AI safer on a go-forward basis.
The hope of humanity is that with those added controls and safety measures, we can continue to harness the goodness of AI and mitigate or prevent AI from badness. For more about the importance of ongoing research and practice associated with AI safety and security, see my coverage at the link here.

Would that count as an example of making us stronger? I am going to vote for Yes. We would be stronger by being better able to harness AI to positive ends. We would be stronger due to discovering new ways to avoid AI evildoing. It's a twofer.

Another possibility is that we became a globally unified force of humankind. In other words, we set aside all other divisions and opted to work together to survive and defeat the AI attack. Imagine that. It seems reminiscent of those sci-fi movies where outer space aliens try to get us and luckily, we harmonize to focus on the external enemies. Whether the unification of humanity would remain after having overcome the AI is hard to say. Perhaps, over some period of time, our resolve to be unified will weaken. In any case, it seems fair to say that for at least a while we would be stronger. Stronger in the long run? Can't say for sure.

There are more possibilities of how we might stay alive. One that's a bit outsized is that we somehow improve our own intellect and outsmart the AI accordingly. The logic for this is that maybe we rise to the occasion. We encounter AI that is as smart or smarter than us. Hidden within us is a capacity that we've never tapped into. The capability is that we can enhance our intelligence, and now, faced with the existential crisis, this indeed finally awakens, and we prevail. That appears to be an outlier option, but it would seem to make us stronger.

What Does Stronger Entail

All in all, it seems that if we do survive, we are allowed to wear the badge of honor that we are stronger for having done so. Maybe so, maybe not. There are AI doomers who contend humankind won't necessarily be entirely destroyed.
You see, AI might decide to enslave some or all of humanity and keep a few of us around (for some conjecture on this, see my comments at the link here). This brings up a contemplative question. If humans survive but are enslaved by AI, can we truly proclaim that humankind is stronger in that instance? Mull that over.

Another avenue is that humans live but it is considered a pyrrhic victory. That type of victory is one where there is a great cost, and the end result isn't endearing. Suppose that we beat the AI. Yay. Suppose this pushes us back into the stone age. Society is in ruins. We have barely survived. Are we stronger?

I've got a bunch more of these. For example, imagine that we overcame AI, but it had little if anything to do with our own fortitude. Maybe the AI self-destructs inadvertently. We didn't do it, the AI did. Do we deserve the credit? Are we stronger?

An argument can be made that maybe we would be weaker. Why so? It could be that we are so congratulatory on our success that we believe it was our ingenious effort that prevented humankind's destruction. As a result, we march forward blindly and ultimately rebuild AI. The next time around, the AI realizes the mistake it made and finishes the job.

Putting Our Minds To Work

I'm sure that some will decry that this whole back-and-forth on this topic is ridiculous. They will claim that AI is never going to reach that level of capability. Thus, the argument has no reasonable basis at all. Those in the AI accelerationists camp might say that the debate is unneeded because we will be able to suitably control and harness AI. The existential risk is going to be near zero. In that case, this is a lot of nonsense over something that just won't arise. The AI doomers would likely acknowledge that the aforementioned possibilities might happen.
Their beef with the discussion would probably be that arguing over whether humans will be stronger if we survive is akin to debating the placement of chairs on the deck of the Titanic. Don't be fretting about the stronger dilemma. Instead, put all our energy into the prevention of AI doomsday. Is all this merely a sci-fi imaginary consideration? Stephen Hawking said this: 'The development of full artificial intelligence could spell the end of the human race.' There are a lot of serious-minded people who truly believe we ought to be thinking mindfully about where we are headed with AI. A new mantra might be that the stronger we think about AI and the future, the stronger we will all be. The strongest posture would presumably be as a result of our being so strong that no overwhelming AI threats have a chance of emerging. Let's indeed vote for human strength.

Thomas Moynihan: Is increasing complexity humanity's path to survival or destruction?

RNZ News

25-05-2025



Can humanity take a path toward a better future? Cambridge University's Dr Thomas Moynihan thinks we have the tools that make it possible.

Humanity's strength is in our shared knowledge and thinking - a kind of 'global brain', Cambridge University's Dr Thomas Moynihan says. But does increasing complexity ultimately create a path to our species' certain destruction, or can we build a more benevolent future?

Dr Thomas Moynihan is a writer interested in the history of our thoughts about the future. He is a visiting researcher at the University of Cambridge's Centre for the Study of Existential Risk and the author of X-Risk: How Humanity Discovered Its Own Extinction. In a recent article for Noema magazine, he discussed the idea that we're unintentionally building an artificial 'world brain'.

It is thought that 99 percent of all species that have ever lived are now extinct. But Moynihan says compared to the length of time humans have existed, it has only been in the past few hundred years that we've begun to seriously contemplate our own possible extinction.

"When we do think about the sheer complexity of the planetary predicament and the amount of vested interest in corruption globally, the crumbling of geopolitical stabilities, I think we've reached a point of such technological might but haven't got the systems in place to harness that in productive ways. So, not to be too despondent, but it is a quite terrifying situation," he says.

"There are these branching paths ahead of us, and some of these lead - in the near term, within potentially decades, maybe even years - to wholesale destruction. The world seems more precarious than ever."
Moynihan is not willing to speculate on how likely extinction is for humanity, but he says others have: Lord Martin Rees, the UK's Astronomer Royal, has given us a 50:50 chance of making it to the end of the century, while Oxford University philosopher Toby Ord has predicted there is a one in six chance we won't make it that long.

"But then there are other futures," Moynihan says. "There are other paths out of the present wherein that doesn't happen and we continue doing the things that we've been doing."

What does AI mean for the future of the planet? Can it help us save ourselves?

"AI seems new and it seems scary and newfangled, because we often think that we haven't been doing that with cognitive processes - and to a degree that is true, but at the same time intelligence has never been brain-bound," Moynihan says.

"We learn who we are and what we're capable of and all the things that make us powerful as intelligent agents from the outside in - we learn from copying our parents and our community.

"Humans have always been completely enmeshed with their technologies and have been transformed by them, and therefore created more transformative technologies in turn. And so this is, in a sense, an extension of that long-run process that's been going on forever."

The future is going to be much stranger, he says.

"If things go well and these more cataclysmic scenarios don't happen, but we do develop more powerful, more potent AI systems - the kind of positive vision that I see is not utopias of abundance and all human problems are solved. Again, history is going to get more complicated as that happens, and therefore that final kind of destination, that utopia is never going to quite happen, in my eyes.

"We'll begin cooperating with these systems and they'll transform us and our interests will transform in turn, and it'll be this open ended ongoing process.
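The Rees and Ord figures quoted above are per-century odds. The article leaves them as raw numbers, but a rough back-of-envelope conversion shows what they would imply year by year; the 75-year horizon and the constant-annual-risk assumption here are purely illustrative, not something either estimator claims:

```python
# Hedged sketch: convert per-century extinction-risk estimates into an
# equivalent constant annual risk. Solves 1 - (1 - p)**years == century_risk
# for p. Horizon and constant-hazard assumption are illustrative only.

def annual_risk(century_risk, years=75):
    """Constant yearly risk p that compounds to century_risk over `years`."""
    return 1 - (1 - century_risk) ** (1 / years)

for name, risk in [("Rees (50:50)", 0.5), ("Ord (1 in 6)", 1 / 6)]:
    print(f"{name}: ~{annual_risk(risk):.2%} per year")
```

Under these assumptions the 50:50 figure works out to a little under one percent of risk per year, which is one way to make such headline odds concrete.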
"To really zoom out, the project of human enquiry, is all based upon us trying to know more about the world so we can navigate it better, so that we can mitigate the risks better. This began with the invention of crop circulation or the dam, or even city walls." Ironically, as we gain knowledge and our society and technology become more complex, different new risks are created, he says. "That project of inquiry that began with the invention of crop circulation also led to the invention of hydrogen bombs." Of the thinkers who have considered the invention of a global human brain, there are as many who have said it is beneficial and what we need to survive as have said it is catastrophic and terrible, he says. Each step on the pathway - from the leap from single-celled organisms to multicellular creatures, from solitary hunters to large-scale cooperative groups - each step comes with the sacrifice of separate autonomy to a collective that is a more potent and complex whole. "So, this is just to assume that all this world brain stuff is feasible anyway - which it may not be; But if you think about it, that we are creating a far more complex planetary system and are far more coordinated globally, even if that hasn't led to peace ... if that's going to intensify, then of course something like a loss of autonomy will necessarily have to happen on the human individual." Humanity was destined to make predictions about our future, but the scope of our ability to foresee what could be ahead took time to develop, he says. "You go back to anywhere in the ancient world and no-one had quite yet noticed that the entire human future could be be drastically different to the past, and in unpredictable ways, in some sense simply because there just wasn't enough historic record yet. "So there wasn't the chronicle to look back and go 'oh the past was a foreign country', such that the future might become one too. 
"But also because the rate of change was so slow that within one lifetime you didn't really see so many things changing - that kind of rate of unprecedented change is only going to continue." Today's forms of art, cultural expression and media would have been almost incomprehensible to the ancients, Moynihan says. Photo: AFP "Now we step into the future with almost more of the opposite, I think. We now appreciate just how complicated everything is, and just how the smallest tiny inflection or perturbation can change the entire future in completely cascading ways. "It took Edward Lorenz in the 1960s to discover this by accident by messing around with weather simulations on his computer, to arrive at this fundamental insight from chaos theory - is that even in deterministic systems, very small changes to initial systems can leave to completely divergent futures. "And ... that metaphor of the branching paths - we now know that that applies profoundly at planetary level. If that can cultivate again that kind of sense of collective responsibility, then that would be a brilliant thing." Moynihan himself is hopeful the future can be more in line with proposals that have been made of a hopeful vision and cooperative steps forward. "And I do think that in the current era - you look at the people in charge and the ways that they act, and of course that seems like a completely idealistic thing. But then again, only 200 years ago the idea that universal suffrage was real, and that women would have the vote and that civil rights would be a thing that happened, and LGBT rights - those things would have all seemed impossible. "So I think we have to keep thinking that what seems impossible to us now can change overnight." Sign up for Ngā Pitopito Kōrero, a daily newsletter curated by our editors and delivered straight to your inbox every weekday.

Is increasing complexity humanity's path to survival or destruction?
Is increasing complexity humanity's path to survival or destruction?

RNZ News

time25-05-2025

  • Science
  • RNZ News

Is increasing complexity humanity's path to survival or destruction?

Can humanity take a path toward a better future? Cambridge University's Dr Thomas Moynihan thinks we have the tools that make it possible. Photo: CHRISTIAN BARTHOLD Humanity's strength is in our shared knowledge and thinking - a kind of 'global brain', Cambridge University's Dr Thomas Moynihan says. But does increasing complexity ultimately create a path to our species' certain destruction, or can we build a more benevolent future? Dr Thomas Moynihan is a writer interested in the history of our thoughts about the future. He is a visiting researcher at the University of Cambridge's Centre for the Study of Existential Risk and the author of X-Risk: How Humanity Discovered Its Own Extinction. And in a recent article for Noema magazine, discussed the idea we're unintentionally building an artificial 'world brain'. It is thought that 99 percent of all species that have ever lived are now extinct. But Moynihan says compared to the length of time humans have existed, it has only been the past few hundred years we've begun to seriously contemplate our own possible extinction. "When we do think about the sheer complexity of the planetary predicament and the amount of vested interest in corruption globally, the crumbling of geopolitical stabilities, I think we've reached a point of such technological might but haven't got the systems in place to harness that in productive ways. So, not to be too despondent, but it is a quite terrifying situation. "There are these branching paths ahead of us, and some of these lead - in the near term, within potentially decades, maybe even years - to wholesale destruction. The world seems more precarious than ever. "But then there are other futures, there are other paths out of the present wherein that doesn't happen and we continue doing the things that we've been doing." What does AI mean for the future of the planet? Can it help us save ourselves? 
"AI seems new and it seems scary and newfangled, because we often think that we haven't been doing that with cognitive processes - and to a degree that is true, but at the same time intelligence has never been brain-bound," Moynihan says. "We learn who we are and what we're capable of and all the things that make us powerful as intelligent agents from the outside in - we learn from copying our parents and our community. "Humans have always been completely enmeshed with their technologies and have been transformed by them, and therefore created more transformative technologies in turn. And so this is, in a sense, an extension of that long-run process that's been going on forever." Photo: 123rf The future is going to be much stranger, he says. "If things go well and these more cataclysmic scenarios don't happen, but we do develop more powerful more potent AI systems - the kind of positive vision that I see is not utopias of abundance and all human problems are solved. Again, history is going to get more complicated as that happens, and therefore that final kind of destination, that utopia is never going to quite happen, in my eyes. "We'll begin cooperating with these systems and they'll transform us and our interests will transform in turn, and it'll be this open ended ongoing process. "To really zoom out, the project of human enquiry, is all based upon us trying to know more about the world so we can navigate it better, so that we can mitigate the risks better. This began with the invention of crop circulation or the dam, or even city walls." Ironically, as we gain knowledge and our society and technology become more complex, different new risks are created, he says. "That project of inquiry that began with the invention of crop circulation also led to the invention of hydrogen bombs." 
Of the thinkers who have considered the invention of a global human brain, as many have said it is beneficial and what we need to survive as have said it is catastrophic and terrible, he says.

Each step on the pathway - from single-celled organisms to multicellular creatures, from solitary hunters to large-scale cooperative groups - comes with the sacrifice of separate autonomy to a collective that is a more potent and complex whole.

"So, this is just to assume that all this world brain stuff is feasible anyway - which it may not be. But if you think about it, we are creating a far more complex planetary system and are far more coordinated globally, even if that hasn't led to peace ... if that's going to intensify, then of course something like a loss of autonomy will necessarily have to happen at the level of the human individual."

Humanity was destined to make predictions about our future, but the scope of our ability to foresee what could be ahead took time to develop, he says.

"You go back to anywhere in the ancient world and no-one had quite yet noticed that the entire human future could be drastically different to the past, and in unpredictable ways, in some sense simply because there just wasn't enough historical record yet.

"So there wasn't the chronicle to look back and go 'oh, the past was a foreign country', such that the future might become one too.

"But also because the rate of change was so slow that within one lifetime you didn't really see so many things changing - that kind of rate of unprecedented change is only going to continue."

Today's forms of art, cultural expression and media would have been almost incomprehensible to the ancients, Moynihan says. Photo: AFP

"Now we step into the future with almost more of the opposite, I think. We now appreciate just how complicated everything is, and how the smallest inflection or perturbation can change the entire future in completely cascading ways.
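The "cascading" sensitivity Moynihan describes can be sketched in a few lines of Python. The logistic map used here is an illustrative stand-in for a chaotic system - it is not drawn from the interview, and the parameter r = 4.0 and the starting values are arbitrary choices that put the map in its chaotic regime.

```python
# Minimal sketch of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) with r = 4.0 (chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # perturbed by one part in a million

# Early on, the two runs are indistinguishable...
print(abs(a[5] - b[5]))
# ...but after enough iterations they bear no resemblance to each other.
print(abs(a[50] - b[50]))
```

Both trajectories are fully deterministic, yet a perturbation in the sixth decimal place is amplified at every step until the two futures diverge completely - the same qualitative behaviour Lorenz observed in his weather simulations.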
"It took Edward Lorenz in the 1960s to discover this by accident by messing around with weather simulations on his computer, to arrive at this fundamental insight from chaos theory - is that even in deterministic systems, very small changes to initial systems can leave to completely divergent futures. "And ... that metaphor of the branching paths - we now know that that applies profoundly at planetary level. If that can cultivate again that kind of sense of collective responsibility, then that would be a brilliant thing." Moynihan himself is hopeful the future can be more in line with proposals that have been made of a hopeful vision and cooperative steps forward. "And I do think that in the current era - you look at the people in charge and the ways that they act, and of course that seems like a completely idealistic thing. But then again, only 200 years ago the idea that universal suffrage was real, and that women would have the vote and that civil rights would be a thing that happened, and LGBT rights - those things would have all seemed impossible. "So I think we have to keep thinking that what seems impossible to us now can change overnight." Sign up for Ngā Pitopito Kōrero, a daily newsletter curated by our editors and delivered straight to your inbox every weekday.
