'End of homo sapiens?' Horrifying warning from top Scottish AI expert


The break with the past will be so revolutionary, so all-encompassing, that it could alter the very nature of what it means to be human.
Talking to Professor Richard Susskind leaves your head spinning. You end the discussion with a sense that humanity is either cursed or blessed to live through this moment. Whatever the future holds, for good or ill, we're about to see the old world slide away and a new world come roaring into being in our lifetimes.
Scottish author, speaker, and independent adviser to international professional firms and national governments Richard Susskind OBE
The Glaswegian technologist is one of Britain's leading public intellectuals, and an adviser to governments and big business around the world on the impact AI has on society, commerce and humanity.
He is currently special envoy for justice and AI to the Commonwealth, and was president of the Society for Computers and Law, as well as technology adviser to the Lord Chief Justice of England and Wales. He holds professorships at Oxford and Strathclyde universities.
His new book, How To Think About AI, is an indispensable guide to understanding what artificial intelligence can – or will – do to us, our societies, our nations, and the world. Susskind's key point is that we have two choices: we either save the world 'with AI', or save the world 'from AI'.
If harnessed correctly AI could usher in a near utopia. However, if allowed to run out of control, and kept in the hands of a few super-rich tech barons, then dystopia might be too soft a term for what the future holds.
The clock is ticking. AI is moving at an incredible pace and unless citizens get to grips with what's coming we won't be equipped to demand that our politicians make the right choices. That, says Susskind, is why he's written this book: so ordinary people have the facts, figures and arguments required at their fingertips.
The Herald on Sunday caught up with Susskind at his Hertfordshire home. Casually dressed and brimming with enthusiasm for his subject, he explained that the biggest issue we need to get our heads around is the arrival of artificial general intelligence (AGI). In broad terms, that's AI which can understand and learn just like humans and perform any task we can.
Profound
Currently, the most commonly used AI is generative AI, the sort of technology found in ChatGPT, where machines perform functions such as creating text and images, pulling research together, or automating customer service, based on the data they have been fed.
The gulf between the two forms of AI is huge. Susskind believes that AGI may be a reality within the next five to 10 years. Its arrival would upend the world unless we're prepared.
AGI would mean 'machines could match human performance across all cognitive tasks. They'll be able to do anything we can do with our remarkable brains. Historically, that was regarded as science fiction'.
But not any more. 'The more I thought about this, the more I thought how ill-prepared we are as humanity for what would be the single most significant development in our history: that machines outperform us. The implications are so profound for you, me, our kids, government, warfare, everything.'
Among technology experts and developers, the idea that 'we should plan for AGI between 2030/35 is now pretty mainstream thinking'. He says: 'I want to urge people to ask this question: what if AGI?'
Over the short term – 'the next two or three years' – AI will mostly lead to 'automation', where we 'take out humans and plug in machines. That's why there's so much commercial interest in AI, because if you're in government or business you can see how to increase productivity and efficiency'.
The companies Susskind advises are all already using AI to 'summarise documents, draft emails, record meetings, produce action plans, create PowerPoints. There's no doubt they're enjoying efficiency gains'.
Clearly, even this relatively straightforward form of AI causes human redundancies. However, the long-term future won't simply be about 'automation, it will be one of innovation and elimination'.
Innovation is the upside. 'AI could allow us to do things that previously weren't possible.'
Think of AI analysing all the healthcare data in Britain, identifying everyone at risk of cancer, and then creating the drugs required to keep them alive – all without the need for human doctors or scientists.
However, that lack of human involvement leads inevitably to elimination. AGI, Susskind says, 'will bring about a state of affairs where we'll no longer have the need for human service'.
For example, 'the future of healthcare', he suggests, could be one of 'non-invasive therapies, where AI finds ways of fixing people without cutting them open, and preventative medicine finding ways of anticipating people's medical difficulties'.
AI would, in this case, eliminate the need for surgeons. In medical terms it 'puts the fence at the top of the cliff rather than the ambulance at the bottom'. In other words, the very notion of healthcare would have to be rethought. The same is true for 'education, justice, the climate and so on', says Susskind. 'The common assumption is that technology will computerise what's already going on. I'm saying it's not going to end up that way. It's going to end up eliminating and fundamentally transforming the way we do much in life.'
Susskind cautions us not to think of AI 'on the same spectrum as social media. It's a different phenomenon altogether'. The earthquake social media caused would be a mere rumble compared to the possible future ushered in by AI.
Transform
AGI will make us question the very nature of work, and of humanity's purpose on Earth. For instance, if AGI does transform medicine so radically that GPs and even surgeons become redundant, how would we respond to that?
Susskind asks whether the intellectual 'starting point should be asking what this means for the recipients, the patients' rather than what it means for medics. 'It's not the purpose of law to give lawyers a living any more than it's the purpose of ill health to give doctors a living.
'Let's be blunt – if we can find cheaper, quicker, better, less intrusive, more convenient ways of solving problems, the market isn't going to show any loyalty to old ways of working.'
He says that, if AI finds a cure for cancer: 'I don't think people would say, "well hang on, what about the oncologists".' Candles and wheels still exist today, but we don't see candlemakers or wheelwrights on the high street, Susskind notes. With AI, 'we should expect the same across the professions'.
Societies will have to decide in which cases – if any – humans should remain in the driving seat, despite machines being able to accomplish tasks better. Susskind suggests the jury system may be one such field exempt from AI as we would still want to be judged by our peers, even though machines 'might be better at determining the facts of a dispute'.
We need to start considering 'what humanity would look like without the notion of work'. Susskind believes it's a very 'middle-class' view of the world to feel that work gives meaning to life. Millions of people endure work as drudgery and would be glad 'if they could flick a switch' and end labouring for low wages to make some CEO rich.
However, a world without work becomes a dystopia of poverty if those who are surplus to requirements can't earn money.
A future in which AGI replaces vast swathes of humanity in the workplace is filled with 'socio-economic and political risks'.
Technology is now 'concentrated in the hands of a very small number of non-state actors'. These giant corporations aren't subject to international law, or much regulation.
Susskind says we need to imagine a world where only 5% of today's workforce is still employed, yet where 'economic productivity is wildly enhanced by AI. The big question is: how would we ensure that the wealth created is distributed from those who own the technology to those who have been disenfranchised through technology?'
Susskind stresses he's 'not remotely Marxist', but these very issues were 'raised by Marx in his objection to capitalism'.
Some thinkers believe the arrival of AGI will usher in an era of 'techno-communism' where nobody works, and we can all be writers and artists, or play sports or do whatever fulfils us while AI looks after the world and provides us with all our needs.
But as Susskind says, the tech giants are 'under no obligation to redistribute', adding: 'I keep saying: we haven't thought through the fundamental political and social questions arising. We don't want to be discussing this at the last minute. We want to be discussing it now.'
Disruption
SUSSKIND refers to how anthropologists divide much of human history into a series of stages: 'the era of orality' when knowledge was shared through speech; 'the era of script' when we used laboriously handwritten books; and the 'era of print'. He adds: 'With each transformation you see fundamental changes in societal structure, particularly law, our economic models, and the rules we create.'
The era of AI will cause far more disruption 'so we should expect fundamental change to our political, economic, social and legal institutions'. Trying to adapt our current systems to what's coming is futile. It would be like Victorian engineers using Sumerian clay tablets written in cuneiform to design railways. 'If you ask the question 'what if AGI', you realise we need to fundamentally rethink society.'
To make matters worse, traditionally it's accepted that 'the law lags 10 years behind technology'. Yet in 10 years, we could have AGI. 'We need to snap into action.'
Consider just one facet of human life that's very important but seldom discussed: the company audit.
It's statutory and vital, says Susskind, as it ensures all is above board and finances are reported honestly. But in the era of AI, the traditional audit already looks outdated. Auditors take samples of company transactions to compile their annual reports; AI could examine all the data, and not annually but constantly.
What happens to all those auditors? What happens to the giant companies which today do the audits – firms like Deloitte, PwC and KPMG?
'I'm trying to raise a flare,' says Susskind, 'and say we're institutionally ill-equipped to deal with this emerging technology. The mindset of most leaders is to say 'this is amazing, we can become more efficient'.
'What I'm saying is once you start to think about AGI you realise this isn't an efficiency play, this is going to completely shatter our conception of the labour market. It's going to require us to generate rules and regulations in days rather than years.
'My dilemma is that on one hand I see AI helping us solve some of humanity's greatest challenges like healthcare, education, the climate, and access to justice, but on the other hand I see this mountain range of risks and vulnerabilities.
'That's why I'm saying that balancing the benefits and threats – saving humanity both from AI and with AI – is the defining challenge of our era.'
He adds: 'As humans we must be able to hold two thoughts in our heads: that these systems are both potentially marvellous and potentially harmful. I can see both the horrors and the life-changing potential.
'We need to call up an army of our very best economists, lawyers and technologists to work on this. It's our Apollo programme.'
However, Susskind adds: 'The question is: is it too late? Are we sufficiently advanced in AI, without regulatory constraints, that we're destined to be sharing the planet with a greatly superior capability?'
No political leader is discussing 'technological unemployment and what this means for humanity. When I raise these issues, what I get from most leaders is 'I see what you're saying, Richard, but we've got enough to worry about'. It's the next person's problem, not theirs'.
Political and business leaders think too short-term, he says.
Risk
BUSINESS and governments should, evidently, explore AI's financial benefits, but in tandem they should also be future-proofing society against the risk. Indeed, companies must ask themselves 'do we have a sustainable business in five to 10 years?'
Again, this means we must get to grips with who ultimately owns AI. Unlike most previous breakthrough technologies, such as nuclear power or telecommunications, which were 'initiated and funded within the state', AI originated in the private sector. Susskind adds: 'That's novel. There has been greater investment in AI than in any single enabling technology in history.'
Trillions of dollars are going into research. That phenomenal investment alone should squash any notion that AI is all hype and won't progress any further than it has today, he believes.
Rather, we need to think about the AI systems that 'aren't yet invented' and the power they might unleash. 'My gut tells me that, by 2030, our lives will have been transformed by technologies that haven't yet been invented. What will be left for humans if machines are performing at such high levels? Can we have meaningful lives without work?'
Essentially, the coming of AGI will force us to ask 'what are humans really for'. In a world without work, religious people may find meaning in life through worship, but what of others?
And if we somehow managed to create a world where few of us worked yet the financial spoils of AI were relatively fairly shared, then wouldn't we have effectively built a new slave society, a modern Ancient Rome?
If we did create a society built on AI 'slavery', then what might that do to us? Would we mistreat AIs? We're clearly on a path where robotics and AI collide, so would we abuse or mistreat AI automatons?
Technology often veers into the darkest recesses of human sexuality and crime. So what might this do to us morally? 'The mind boggles,' Susskind adds.
'We can imagine within 10 years little robots that are our companions, therapists, research assistants, joint authors, our pals. How should we feel about that?'
Humans show love to their pets. Would we love AIs? 'We haven't yet begun to think what human-machine relations will be.'
Susskind dismisses claims that AI is all hype and no risk as 'disingenuous and probably dangerous. It's like an asteroid [hitting Earth]. Surely we need to plan for it. You and I may reconvene 10 years from now and say we were worrying unnecessarily but I don't think that's the conversation we'll be having'.
Claims from academics that talking about the risks of AI is 'catastrophising' are simply 'technological myopia', he says. Such views look at AI from the perspective of its technological 'limitations today', not its 'future potential. It underestimates what's going to happen. Look at the scale of the investment, the trajectory of the breakthroughs'.
Race
FROM the 1950s onwards, computer breakthroughs came every five to 10 years; 'now it's every six to 12 months. Look at the market appetite in government and business'. The computing resources used to train AI are 'doubling every six months, that means in 10 years we'll see a one-million-fold increase'.
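That million-fold figure is simple compounding: a quick sanity check of the arithmetic, assuming a steady six-month doubling over the full decade:

```python
# Ten years at one doubling every six months = 20 doublings.
years = 10
doublings = years * 2          # 20 doublings in a decade
growth = 2 ** doublings        # 2**20 = 1,048,576
print(f"{growth:,}")           # roughly a million-fold increase
```

Twenty doublings of training compute is exactly what turns 'every six months' into the million-fold increase Susskind cites.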
Sir Keir Starmer wants to turn Britain into an AI state and 'just let it rip'. America, China and Russia are in an AI arms race. 'There are tens of thousands of start-ups worldwide.'
We're now entering a period of 'accelerating returns', where AI 'will develop the next generation of AI', the technology getting better and better, faster and faster. 'Once these systems begin to self-develop, self-improve, self-propagate we're in a different universe. When I hear people say it's all overblown so pay AI no attention, I think it's irresponsible.'
Even now there are some aspects of AI that we simply cannot fully explain. AI could today, Susskind says, listen to the conversation he and I are having and then write a limerick about it. 'There's no way to explain how it did that. The interesting analogy is that in some ways we humans can't explain our own thoughts. But the point is that we don't really have scientific models yet to explain these incredibly high-performing systems.'
AI can 'give a better summary of a book than most humans. To say that's simply computational statistics – just ones and zeros – is like saying humans are just molecules. It's not a helpful explanation'.
It's routinely said that AI cannot be empathetic. However, it could learn to come up with a simulation of empathy which completely convinces humans. That's an AI psychotherapist.
AI may not be creative like a human novelist but it could come up with 'a new configuration of words which are meaningful and impactful' to humans. That's an AI artist. 'We can imagine robots running faster than Usain Bolt.'
Susskind's daughter sent him some music recently. He listened to it, and liked it. Then she told him it was AI-generated. 'In the future, AI just might be wildly better than us.'
He wonders if, in years to come, we'll seek out human creations or interactions the way we today might prefer to spend money on handmade furniture rather than mass-produced goods. 'We might feel the same about literature, art and music, but I'm not sure our grandchildren will.'
The coming of AI will shake humanity's sense of self to the foundations. No longer would we be the dominant intelligence on Earth. 'It will have a fundamental psychological effect on us and our perception of our position in the scheme of things. The idea of sharing the planet with entities more capable than us is deeply challenging.'
Susskind speculates that if we get on top of AI early enough we can confine it to a 'zombie' status, where it has 'no consciousness, will, or awareness' but is just 'phenomenally capable but non-sentient'.
He adds: 'Our perception of ourselves would be less diminished if we're simply sharing the planet with high-performing zombies.'
But if AI becomes self-aware and conscious 'we'd move down a division'. Even if AI just gives the impression of consciousness it would still leave 'this huge question mark hanging over us'.
Explosion
THE possibility exists that 'if we invent machines as intelligent as us then that will be our last invention'. The machine will become the inventor. This could lead to 'an intelligence explosion, where you go from AGI to a super-intelligence that's unfathomably more capable than us. When would this recursive self-improvement stop? That deeply concerns me'.
This super-intelligence hypothesis – 'the AI-evolution hypothesis' – raises profound cosmological questions. AI which continually self-improves at an astonishing rate could find ways to invent space travel and then 'spread out across the cosmos, in due course replacing us'. We'd be but a footnote: the creature which invented the most powerful 'mind' in the universe.
'I find such ideas fascinating and terrifying,' Susskind adds. An alternative hypothesis is the singularity 'which says that organic biological humans and digital machines will converge, so the next generation will be digitally-enhanced humans'. The problem with that theory is this: 'If the machines are so much more capable than us, the contribution humans make to this merger will fade over time. That might eliminate us.'
Susskind's 'preoccupation' is that we will advance to AGI and 'that will lead to the super-intelligence hypothesis'. Among technologists, debate now rages over whether we should embrace the idea of super-advanced AI colonising the universe as 'our legacy', or if 'our obligation should be to preserve humanity'.
Susskind comes down on the side of preserving humanity. 'I just think of my family, my friends and the joy humans have, and I want this for more people. My hope is that AGI can improve the wellbeing, health and happiness of humanity rather than populate the cosmos.'
He's bewildered why society managed to have deep, intelligent debates in recent years about matters like genetic engineering but has failed to have a 'public conversation' about AI.
If we built a system designed to save the world 'with AI' then, Susskind believes, we could genuinely 'eliminate disease and ill health. That's deliverable'. Each child could have a personalised tutor. Pupils would 'have Aristotle in the afternoon, then art lessons with Michelangelo'.
With climate change, AI could 'develop and perfect new sources of power, ways of disposing of carbon – systems far more promising than we mere humans can put together'. AI could 'increase economic productivity' to a point that allowed us to effectively eliminate poverty. But, again, 'that requires the redistribution of the wealth gained by these systems away from the current providers and across the rest of humanity'.
Threat
THEN, of course, there are the consequences of failing to save the world 'from AI'. There are many 'existential and catastrophic threats: the weaponisation of this technology; the unintentional by-products; that it begins to perform in ways that are deeply damaging and not foreseeable. A powerful autonomous system over which we have little control presents major threats to humanity.
'The socio-economic threat is the biggest: what this does to the labour force, our conception of work, the idea that we have these phenomenally unaccountable powerful organisations which own these systems. Then there's the risk these systems just get things wrong.

'All the classic challenges that we've had since the dawn of civilisation come into sharp focus: how do we organise ourselves politically, what is a just distribution of resources, what is a happy, meaningful life?'
Susskind says: 'If we develop AGI – and this does remain an 'if' but not an unlikely 'if' – then in my view this would represent the most significant discontinuity in the history of humanity and society. A greater leap than fire, agriculture, print or industry, partly because AI will match or outperform our most prized and distinguishing feature – our intellect, our brains, our minds.'
He adds that 'this revolution could well signal the end of pure homo sapiens, whether through the realisation of transhumanism – if/when we become digitally enhanced, perhaps as the next stage in our evolution – or as some cosmologists believe, we become extinct, in the very long run replaced by the unfathomably capable systems that we have invented.
'That is why I think the question 'what if AGI' is the most pressing and momentous question of our time. The future of humanity could be at stake.'
Mind-bending metaphysical questions are raised by the advances of AI. The technology can now create highly convincing virtual reality worlds. So AGI could eventually create worlds indistinguishable from reality. 'It genuinely leads to the Matrix question,' Susskind adds.
If a future AI super-intelligence could create a convincing virtual world, then that means today 'we can't be sure we're not in a virtual world'. In other words, we might already be a computer simulation in a digital universe created by AI.
With technology, it is usually 'people from the dark side' who become early adopters, at a time 'unconstrained by rules, regulations, ethics and qualms'. That's why governments should consider taking power away from corporations and developing state-controlled AI systems. 'That seems to me a very serious policy option,' he adds. It would be one way of ensuring a fairer distribution of AI profits.
Until a few years ago, Susskind was 'irreducibly optimistic about technology'. Today, he's both 'optimistic and pessimistic. AI could be channelled for massive human benefit, but the real risks are so profound that to not be fearful is irrational. That's my call to arms. The first thing we must do is understand what's going on'.
Susskind adds: 'I advise governments. I'm closely connected to governments. I speak to lots of ministers all around the world.'
But all he hears from those in power is 'how can we use ChatGPT', rather than any thinking about how, in 10 years, 'we're going to be in the biggest social crisis we've ever faced'.
'That's why I'm on this mission.'

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

MSP says Elon Musk should ‘forget Trump and bring SpaceX to Scotland'
MSP says Elon Musk should ‘forget Trump and bring SpaceX to Scotland'

Scotsman

timean hour ago

  • Scotsman

MSP says Elon Musk should ‘forget Trump and bring SpaceX to Scotland'

Calls are being made to create a bidding war for Scottish investment between the tech mogul and the US president. Sign up to our Politics newsletter Sign up Thank you for signing up! Did you know with a Digital Subscription to The Scotsman, you can get unlimited access to the website including our premium content, as well as benefiting from fewer ads, loyalty rewards and much more. Learn More Sorry, there seem to be some issues. Please try again later. Submitting... Elon Musk should relocate his American business interests in Scotland after his public falling out with US President Donald Trump, an MSP has suggested. The pair have clashed in recent days over a bill which the tech mogul says will increase the US budget deficit - he has since become one of Mr Trump's fiercest critics. Advertisement Hide Ad Advertisement Hide Ad Ash Regan MSP says Scotland should 'be quick' to take advantage of this and lobby Mr Musk to relocate his business ventures to Scotland. Elon Musk at the White House. | Getty Images The Alba MSP, who previously called on the billionaire to open a Tesla Gigafactory in Scotland, says Scotland is an emerging force within the space and satellite industries, and branded Glasgow the 'satellite manufacturing capital of Europe'. Due to Mr Trump's family and business ties to Scotland, Ms Regan believes such a move by Mr Musk could 'prompt a bidding war between the president of the United States and one of the world's richest men as to who can invest more in Scotland'. Advertisement Hide Ad Advertisement Hide Ad This comes after Mr Trump threatened to cut US government contracts given to Mr Musk's SpaceX rocket company and his Starlink internet satellite services - in response, Mr Musk said SpaceX 'will begin decommissioning its Dragon spacecraft immediately'. He U-turned on this statement within hours, as SpaceX is the only US company capable of transporting crews to and from this space station using its four-person Dragon capsules. 
Cargo versions of this capsule are also used to ferry food and other supplies to the orbiting lab. Donald Trump and Elon Musk pictured at the White House in March (Picture: Roberto Schmidt) |Ms Regan said: 'The Scottish space industry, including satellite-related activities, is projected to be worth £4 billion to the Scottish economy by 2030. 'Glasgow is already known as the satellite manufacturing capital of Europe, and we are on the verge of becoming a global player in the industry. Advertisement Hide Ad Advertisement Hide Ad 'We have the sites, the people and the vision to match Elon Musk's aspirations for SpaceX so the Scottish Government should be opening the door and advertising Scotland as the go-to place if he wishes to relocate his business ventures here if contract cancellation threats in the US are upheld.' Scotland is currently developing multiple spaceports, including the Sutherland Spaceport in the Highlands and the SaxaVord Spaceport in Shetland - Ms Regan says both these sites are ready for Mr Musk to relocate his SpaceX operations to. The Alba MSP said other suitable sites include the proposed Spaceport 1 in the Outer Hebrides, the Macrihanish Spaceport Cluster and Prestwick Spaceport. She says Scotland cannot be left behind as passengers in this emerging industry, and added: 'I previously proposed Scotland as the site for the next Tesla Gigafactory and unfortunately Elon Musk ruled out investment due to the policies of the UK Labour government. Advertisement Hide Ad Advertisement Hide Ad Edinburgh Eastern MSP Ash Regan's Bill would criminalise the purchase of sexual acts. Picture: Getty Images. | Getty Images 'However, the Scottish Government has been a key partner in the growing success of our satellite industry, so in Scotland we would have a much better opportunity of attracting such investment where the UK Government has previously failed. 
'Scotland has the potential for abundant renewable energy, which is needed to power emergent technologies. 'By creating innovative investment opportunities, we can then capitalise on Scotland's USP, ensuring we invest this bounty to benefit Scotland's businesses and communities. 'No more being left behind as passengers while Westminster squanders the power of our own resources.

Layoffs sweep America as AI leads job cut 'bloodbath'
Layoffs sweep America as AI leads job cut 'bloodbath'

Daily Mail​

timean hour ago

  • Daily Mail​

Layoffs sweep America as AI leads job cut 'bloodbath'

Elon Musk and hundreds of other tech mavens wrote an open letter two years ago warning AI would 'automate away all the jobs' and upend society. And it seems as if we should have listened to them. Layoffs are sweeping America, nixing thousands of roles at Microsoft, Walmart, and other titans, with the newly unemployed speaking of a'bloodbath' on the scale of the pandemic. This time it's not blue-collar and factory workers facing the ax - it's college grads with white-collar roles in tech, finance, law, and consulting. Entry-level jobs are vanishing the fastest, stoking fears of recession and a generation of disillusioned graduates left stranded with CVs no one wants. Graduates are now more likely to be unemployed than others, data has shown. Chatbots have already taken over data entry and customer service posts. Next-generation 'agentic' AI can solve problems, adapt, and work independently. These 'smartbots' are already spotting market trends, running logistics operations, writing legal contracts, and diagnosing patients. The markets have seen the future: AI investment funds are growing by as much as 60 per cent a year. 'The AI layoffs have begun, and they're not stopping,' says tech entrepreneur Alex Finn. Luddites who don't embrace the tech 'will be completely irrelevant in the next five years,' he posted on X. Procter & Gamble, which makes diapers, laundry detergent, and other household items, this week said it would cut 7,000 jobs, or about 15 per cent of non-manufacturing roles. Its two-year restructuring plan involves shedding managers who can be automated away. Microsoft last month announced a cull of 6,000 staff - about three per cent of its workforce - targeting managerial flab, after a smaller round of performance-related cuts in January. LA-based tech entrepreneur Jason Shafton said the software giant's layoffs spotlight a trend 'redefining' the job market. 
'If AI saves each person 10 per cent of their time (and let's be real, it's probably more), what does that mean for a company of 200,000?' he wrote. Retail titan Walmart, America's biggest private employer, is slashing 1,500 tech, sales, and advertising jobs in a streamlining effort. Citigroup, cybersecurity firm CrowdStrike, Disney, online education firm Chegg, Amazon, and Warner Bros. Discovery have culled dozens or even hundreds of their workers in recent weeks. Musk himself led a federal sacking spree during his 130-day stint at the Department of Government Efficiency, which ended on May 30. Federal agencies lost some 135,000 to firings and voluntary resignation under his watch, and 150,000 more roles are set to be mothballed. Employers had already announced 220,000 job cuts by the end of February, the highest layoff rate seen since 2009. In announcing cuts, executives often talk about restructuring and tough economic headwinds. Many are spooked by President Donald Trump's on-and-off tariffs, which sent stock markets into free-fall and prompted CEOs to second-guess their long-term plans. Others say something deeper is happening, as companies embrace the next-generation models of chatbots and AI. Robots and machines have for decades usurped factory workers. AI chatbots have more recently replaced routine, repetitive, data entry, and customer service roles. A new and more sophisticated technology - called Agentic AI - now operates more independently: perceiving the environment, setting goals, making plans, and executing them. AI-powered software now writes reports, analyzes spreadsheets, creates legal contracts, designs logos, and even drafts press releases, all in seconds. Banks are axing graduate recruitment schemes. Law firms are replacing paralegals with AI-driven tools. Even tech startups, the birthplace of innovation, are swapping junior developers for code-writing bots. 
Managers increasingly seek to become 'AI first' and test whether tasks can be done by AI before hiring a human. That's now company policy at Shopify and is how fintech firm Klarna shrank its headcount by 40 per cent, CEO Sebastian Siemiatkowski told CNBC last month. Experienced workers are encouraged to automate tasks and get more work done; recent graduates are struggling to get a foot in the door. From a distance, the job market looks relatively buoyant, with unemployment holding steady at 4.2 per cent for the third consecutive month, the Labor Department reported on Friday. But it's unusually high - close to 6 per cent - among recent graduates. The Federal Reserve Bank of New York recently said job prospects for these workers had 'deteriorated noticeably'. That spells trouble not just for young workers, but for the long-term health of businesses - and the economy. Economists warn of an AI-induced downturn, as millions lose jobs, spending plummets, and social unrest festers. It's been dubbed an industrial revolution for the modern era, but one that's measured in years, not decades.

Dario Amodei, CEO of Anthropic, one of the world's most powerful AI firms, says we're at the start of a storm. AI could wipe out half of all entry-level white-collar jobs - and spike unemployment to 10-20 per cent in the next one to five years, he told Axios. Lawmakers have their heads in the sand and must stop 'sugar-coating' the grim reality of the late 2020s, Amodei said. 'Most of them are unaware that this is about to happen,' he said. 'It sounds crazy, and people just don't believe it.'

Young people who've been culled are taking to social media to vent their anger as the door to a middle-class lifestyle closes on them. Patrick Lyons calls it 'jarring and unexpected' how he lost his Austin-based program management job in an 'emotionless business decision' by Microsoft.
'There's nothing the 6,000 of us could have done to prevent this,' he posted. A young woman coder, known by her TikTok handle dotisinfluencing, posts a daily video diary about the 'f***ing massacre' of layoffs at her tech company as 'AI is taking over'. Her job search is going badly. She claims one recruiter appeared more interested in taking her out for drinks than offering a paycheck. 'I feel like s***,' she added. Ben Wolfson, a young Meta software engineer, says entry-level software jobs dried up in 2023. 'Big tech doesn't want you, bro,' he said. Critics say universities are churning out graduates into a market that simply doesn't need them. A growing number of young professionals say they feel betrayed - promised opportunity, but handed a future of 'AI-enhanced' redundancy.

Others are eyeing an opportunity for a payout to try something different. Donald King posted a recording of the meeting in which he was unceremoniously laid off from his data science job at consulting firm PwC. 'RIP my AI factory job,' he said. 'I built the thing that destroyed me.' He now posts from Porto, in Portugal - a popular spot for digital nomads - where he's founded a marketing startup.

Industry insiders say it won't be long before another generation of AI arrives to automate new sectors. As AI improves, the line between 'safe' and 'automatable' work gets blurrier by the day. Human workers are advised to stay one step ahead and build AI into their own jobs to increase productivity. Optimists point to careers such as radiology, where humans initially looked set to be outmoded by machines that could speedily read medical scans and pinpoint tumors. But the layoffs didn't happen. The technology has been adopted - but radiologists adapted, using AI to sharpen images, automate some tasks, and boost productivity. Some radiology units even expanded their increasingly efficient human workforce.
Others say AI is a scapegoat for 2025's job cuts - that executives are downsizing for economic reasons and blaming technology so as not to panic shareholders. But for those who have lost their jobs, the future looks bleak.

The AI Risk Equation: Delay vs Safety – Calculating the True Cost: By Erica Andersen

Finextra


In the race to adopt artificial intelligence, too many enterprises are slamming on the brakes while neglecting the accelerator. As the saying goes, "AI may not be coming for your job, but a company using AI is coming for your company." The pressure to integrate AI solutions is intense, and organizations that have missed the early adoption windows are increasingly turning to external vendors for quick fixes. The longer enterprises wait, the faster and riskier adoption becomes when they are finally forced into it: by delaying, they will have to learn fast with no experience under their belt. This article explores the significant risks of unchecked AI deployment and offers guidance for navigating them.

When AI Tools Go Rogue

Remember the UK Post Office Horizon scandal? A conventional software system led to hundreds of innocent people being prosecuted, some imprisoned, and lives utterly destroyed. That was just normal software. The AI tools your organization might be preparing to unleash are an entirely different beast. AI is like an adolescent: moody, unpredictable, and occasionally dangerous. Consider Air Canada's chatbot debacle: it confidently provided customers with incorrect bereavement policy information, and the courts ruled that Air Canada had to honor what its digital representative had erroneously promised. While in this case one might argue the chatbot was more humane than the company's actual policies, the financial implications were significant. The critical question is: can your AI tool be trusted to behave and do its job, or will it go on a rampage and wreck your business? Deploying AI with robust oversight is a skill organizations must master if deployments are to succeed rather than turn into a game of Russian roulette. Companies starting now are gaining a significant edge in learning how to control this critical technology.
The Zillow Cautionary Tale

Zillow's failed foray into real estate flipping highlights the dangers of AI relying solely on past data. The algorithm, confident in its predictions, failed to account for rapidly changing market conditions, such as a drop in demand or nearby property issues - it could take months for Zillow's model to register the impact on valuations. Meanwhile, savvy sellers capitalized on this, unloading properties to Zillow before the company detected that prices were plummeting; the losses ultimately cost Zillow 10% of its workforce. The problem? Zillow's AI was backward-looking, trained on historical data and unable to adapt to a dynamic environment. The same issue plagues stock-picking algorithms and other systems that perform beautifully on historical data but collapse when faced with new market conditions. If your AI is making decisions based solely on past data without accounting for environmental changes, you are setting yourself up for a Zillow-style catastrophe. To mitigate this risk, ensure your AI's training data represents current and anticipated future conditions, and consider the risks carefully. This is particularly crucial for financial systems, where tail risks are more frequent than models predict. Medical applications, such as analyzing skin conditions, are much less susceptible to changing environments, as long as the AI is trained on a representative sample of the population.

Startup Corner-Cutting: From Unicorns to Bankruptcy

Your vendor might be cutting corners. While they may not be another Theranos, the risk is real. Take the UK tech unicorn that recently collapsed into bankruptcy amid financial reporting discrepancies. It has since emerged that it was a fraud, and people using the service have been left with orphaned applications. Startups face intense pressure to deliver results, which can lead to critical oversights, with inconvenient truths often swept under the rug. One common pitfall is bias in training data.
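As an illustration only, a first-pass bias check can be as simple as comparing outcome rates across groups before a system goes live. The data, group labels, and alert threshold below are all invented for the sketch; real audits use richer fairness metrics.

```python
# Toy demographic-parity check on (group, approved) decision records.
# All data and the 0.1 threshold are hypothetical, for illustration only.

def approval_rates(decisions):
    """Return the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions for two demographic groups:
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(decisions)  # group A approves far more often than B
gap = parity_gap(decisions)
if gap > 0.1:  # illustrative threshold - flag for human review
    print(f"Possible bias: approval-rate gap of {gap:.2f} across groups")
```

A gap this large would not prove discrimination on its own, but it is exactly the kind of signal an internal trial surfaces before customers or regulators do.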
When your system makes judgments about people, inherent biases can lead to discriminatory outcomes - and can even perpetuate and amplify them. Even tech giants aren't immune. Amazon attempted to build an AI resume-screening tool to identify top talent by analyzing its current workforce's resumes. The problem? AWS, its massive cloud division, was predominantly male, so the AI learned to favor male candidates. Even after purging overtly gender-identifying information, the system still detected subtle language patterns more common in men's resumes and continued its bias. If you're using AI to determine whether someone qualifies for financing, how can you be sure the system isn't perpetuating existing biases? My advice: before deploying AI that makes decisions about people, carefully evaluate the data and its potential for bias, and consider implementing bias detection and mitigation techniques. Better yet, start now with an internal trial to surface the problems that biased data might cause. Organizations getting hands-on experience right now will be well ahead of peers who have not started.

The Hallucination Problem

Then there are 'hallucinations' in generative AI - a polite term for making things up, which is exactly what's happening. Just ask Elon Musk, whose chatbot Grok fabricated a story about NBA star Klay Thompson throwing bricks through windows in Sacramento. Sacramento might be bland, but it did not drive Klay to throw bricks through his neighbors' windows. Such fabrications are potentially damaging to reputations, including your company's. How can you prevent similar embarrassments? Keep humans in the decision loop - at minimum, you'll have someone to blame when things go wrong. It wasn't the AI you purchased from 'Piranha AI backed by Shady VC' that approved those questionable loans; it was Johnny from accounting who signed off on them. A practical approach is designing your AI to show its work.
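One way the "show its work" pattern might look in practice is an audit trail: rather than trusting a model's free-text answer, have it propose a query, execute that query yourself, and record both so a human can verify the logic. The schema, question, and function names below are invented for this sketch.

```python
# Sketch of an auditable query flow: the model proposes SQL, we run it
# ourselves and keep question, query, and result together for review.
# The refunds table and the example question are hypothetical.
import sqlite3

def answer_with_audit(question, proposed_sql, db):
    """Execute the model-proposed SQL and return an audit record."""
    rows = db.execute(proposed_sql).fetchall()
    return {"question": question, "sql": proposed_sql, "result": rows}

# Toy database standing in for a real system of record:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE refunds (amount REAL)")
db.executemany("INSERT INTO refunds VALUES (?)", [(10.0,), (25.0,)])

# Imagine the model proposed this SQL for a customer-service question:
record = answer_with_audit(
    "What is the total amount refunded?",
    "SELECT SUM(amount) FROM refunds",
    db,
)
# A reviewer can now check the SQL and the result directly,
# instead of taking a generated sentence on faith.
```

The design choice is that the verifiable artifact is the query itself, not the model's prose - so a hallucinated answer cannot survive contact with the actual data.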
When the system generates outputs by writing code to extract database information, this transparency - an 'explainable AI' approach - allows you to verify the results and the logic used to arrive at them. There are other techniques that can reduce or eliminate the effect of hallucinations, but you need hands-on experience to understand when they occur, what they say, and what risk they expose your organization to.

The Economic and Societal Costs of AI Failures

The costs of AI security and compliance failures extend far beyond the immediate losses:

• Direct Financial Costs: AI security breaches can lead to significant financial losses through theft, ransom payments, and operational disruption. The average cost of a data breach reached $4.45 million in 2023, with AI-enhanced attacks potentially driving this figure higher.
• Regulatory Penalties: Non-compliant AI systems increasingly face steep regulatory penalties. Under GDPR, companies can be fined up to 4% of annual global revenue.
• Reputational Damage: When AI systems make discriminatory decisions or privacy violations occur, the reputational damage can far exceed the direct financial losses and persist for years.
• Market Confidence Erosion: Systematic AI failures across an industry can erode market confidence, potentially triggering investment pullbacks and valuation corrections.
• Societal Trust Decline: Each high-profile AI failure diminishes public trust in technology and institutions, making future innovation adoption more difficult.

The Path Forward

As you enter this dangerous world, you face a difficult choice: do you delay implementing AI and then scramble to catch up, or do you take the more cautious route and start working on AI projects now? The reality is that your competitors are likely adopting AI, and you must do so too in the not-so-distant future. Some late starters will implement laughably ridiculous systems that cripple their operations.
Don't assume that purchasing from established vendors guarantees protection - many products assume you will manage the risks. Trying to run a major AI project with no experience is like trying to drive a car with no training: close calls are the best you can hope for. The winners will be the companies that carefully select the best AI systems while implementing robust safeguards. Consider the following steps:

• Prioritize Human Oversight: Implement robust human review processes for AI outputs.
• Focus on Data Quality: Ensure your training data is accurate, representative, and accounts for potential biases.
• Demand Explainability: Choose AI systems that provide transparency into their decision-making processes.
• Establish Ethical Guidelines: Develop clear ethical guidelines for AI development and deployment. Alternatively, an AI consultancy can provide guidance - but vet them carefully, or you might end up with another problem rather than a solution.
• Apply Proper Security and Compliance Measures: This isn't just good ethics - it's good business.

In the race to AI adoption, remember: it's better to arrive safely than to crash spectacularly before reaching the finish line. Those who have already started their AI journey are learning valuable lessons about what works and what doesn't. The longer you wait, the riskier your position becomes. For everyone else, all you can hope for is more empty chambers in your Russian roulette revolver.

Written by Oliver King-Smith, CEO of smartR AI.
