Today's AI can crack second world war Enigma code 'in short order', experts say

Irish Examiner, 07-05-2025

The Enigma code was a fiendish cipher that took Alan Turing and his fellow codebreakers a herculean effort to crack. Yet experts say it would have crumbled in the face of modern computing.
While Polish experts broke early versions of the Enigma code in the 1930s and built anti-Enigma machines, subsequent security upgrades by the Germans meant Turing had to develop new machines, or 'Bombes', to help his team of codebreakers decipher enemy messages. By 1943, the machines could decipher two messages every minute.
Yet while the race to break the Enigma code has become famous, credited with shortening the second world war by up to two years, and spawning various Hollywood films, experts say cracking it would be a trivial matter today.
'Enigma wouldn't stand up to modern computing and statistics,' said Michael Wooldridge, a professor of computer science and an expert in artificial intelligence (AI) at the University of Oxford.
The Enigma device used by the Axis powers was an electro-mechanical machine that resembled a typewriter, with three rotors that each had 26 possible positions, a reflector that sent the signal back through the rotors and a plugboard that swapped pairs of letters.
Its set-up meant that even if the same key was pressed twice, a different letter would be produced each time. What's more, the initial settings were changed every 24 hours.
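To give a flavour of that mechanism, here is a minimal Python sketch of a rotor machine of this kind. The rotor and reflector wirings below are the historical army rotors I-III and reflector B, but ring settings and the double-stepping of the middle rotor are left out, so it is an illustration rather than a faithful simulator.

```python
import string

ALPHABET = string.ascii_uppercase

# Historical wirings for rotors I-III and reflector B; stepping is simplified.
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",  # rotor I   (leftmost)
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",  # rotor II  (middle)
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",  # rotor III (rightmost)
]
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def make_plugboard(pairs):
    """Swap the letters in each pair; every other letter maps to itself."""
    board = {c: c for c in ALPHABET}
    for a, b in pairs:
        board[a], board[b] = b, a
    return board

def encipher(text, offsets, plug_pairs):
    """Encrypt (or, identically, decrypt) an uppercase A-Z message."""
    plug = make_plugboard(plug_pairs)
    pos = list(offsets)                   # current rotor positions, 0-25
    out = []
    for ch in text:
        pos[2] = (pos[2] + 1) % 26        # step the right-hand rotor each keypress
        c = plug[ch]
        for i in (2, 1, 0):               # forward through the rotors
            idx = (ALPHABET.index(c) + pos[i]) % 26
            c = ALPHABET[(ALPHABET.index(ROTORS[i][idx]) - pos[i]) % 26]
        c = REFLECTOR[ALPHABET.index(c)]  # bounce off the reflector
        for i in (0, 1, 2):               # back through the rotors
            idx = (ALPHABET.index(c) + pos[i]) % 26
            c = ALPHABET[(ROTORS[i].index(ALPHABET[idx]) - pos[i]) % 26]
        out.append(plug[c])
    return "".join(out)

cipher = encipher("ATTACKATDAWN", offsets=(0, 0, 0), plug_pairs=[("A", "B")])
print(cipher)
print(encipher(cipher, offsets=(0, 0, 0), plug_pairs=[("A", "B")]))  # round-trips
```

Because the signal bounces off a reflector that never maps a letter to itself, the output letter can never equal the input letter, and running the ciphertext back through the same settings recovers the original message.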
'Essentially the Enigma devices got their power because the number of possible ways in which a message could be encrypted was astronomically large. Far, far too large for a human to exhaustively check,' Wooldridge said, adding that the 'bombes' were crude hardwired mechanical computers, searching through enormous numbers of possible alternatives to decrypt Nazi messages.
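To put a rough number on that, assuming the common army set-up of three rotors chosen from a box of five plus ten plugboard leads, a back-of-the-envelope count of the daily key space looks like this:

```python
from math import factorial

rotor_orders = 5 * 4 * 3                 # pick and order 3 rotors from 5
rotor_positions = 26 ** 3                # starting position of each rotor
# ways to wire 10 plugboard leads among 26 letters
plugboard_settings = factorial(26) // (factorial(6) * factorial(10) * 2**10)

print(rotor_orders * rotor_positions * plugboard_settings)  # about 1.6e20
```

That comes to roughly 1.6 x 10^20 possible daily settings, which is why exhaustive search by hand was never an option.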
Dr Mustafa A Mustafa, a senior lecturer in software security at the University of Manchester, added that the key to the success of Turing and his colleagues was that Enigma had a number of weaknesses, including that no letter would be represented as itself once enciphered.
'It was [a] brute force attack, trying all different combinations out. But with these weaknesses of the Enigma, they managed to do that. They managed to automate this to do it fast enough to be able to crack the code,' he said.
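That self-avoiding property is simple to exploit in software: a suspected plaintext fragment, or 'crib', can be slid along the ciphertext, and any alignment in which a letter would have to encrypt to itself can be discarded at once. A minimal sketch, using an arbitrary ciphertext string purely for illustration:

```python
def possible_crib_positions(ciphertext, crib):
    """Enigma never encrypts a letter to itself, so a crib cannot sit
    at any offset where it shares a letter with the ciphertext."""
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

# Weather reports were a classic source of cribs.
print(possible_crib_positions("QFZWRWIVTYRESXBFOGKUHQBAISE", "WETTERBERICHT"))
```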
Today, however, the process would be far less arduous, not least because of a technology Turing himself pioneered: AI.
'It would be straightforward to recreate the logic of bombes in a conventional program,' Wooldridge said, noting the AI model ChatGPT was able to do so.
'Then with the speed of modern computers, the laborious work of the bombes would be done in very short order.'
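As a sketch of what such a program might look like, the toy machine above can be attacked by simply trying every rotor start position and keeping any setting whose output contains an expected crib. A real attack would also have to search rotor orders, ring settings and plugboard leads, but the structure is the same:

```python
from itertools import product

def brute_force(ciphertext, crib, plug_pairs):
    """Try all 26**3 rotor start positions on the toy machine above and
    return the settings whose decryption contains the expected crib."""
    hits = []
    for offsets in product(range(26), repeat=3):
        plaintext = encipher(ciphertext, offsets, plug_pairs)
        if crib in plaintext:
            hits.append((offsets, plaintext))
    return hits

secret = encipher("WEATHERREPORTATDAWN", offsets=(4, 11, 19), plug_pairs=[])
for offsets, plaintext in brute_force(secret, "WEATHER", plug_pairs=[]):
    print(offsets, plaintext)   # the true setting (4, 11, 19) is among the hits
```

On a modern laptop this sweep of all 17,576 start positions takes a few seconds at most; the bombes needed racks of electromechanical drums to do comparable work.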
Wooldridge added that a range of modern statistical and computational techniques could also be deployed.
'And the power of modern datacentres is hard to imagine,' he said, noting modern computing power would have astounded Turing.
'Enigma would not remotely be a match for these,' he said.
Using a slightly different approach – that Wooldridge suggested might be slower – researchers have previously used an AI system trained to recognise German using Grimm's fairytales, together with 2,000 virtual servers, to crack a coded message in 13 minutes.
But while modern computing would have rapidly defanged Enigma, techniques such as the Rivest-Shamir-Adleman (RSA) cipher – a system initially developed in 1977 and based on large prime numbers – remain robust.
'In the case of RSA, it's the problem of factoring very large numbers. Brute force techniques – looking through all the alternatives – just won't work on these problems,' said Wooldridge, although he noted such techniques might not hold up against future developments.
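The structure of RSA can be shown with deliberately tiny primes; the security of the real system rests on the primes being hundreds of digits long, so that recovering them from the public modulus is far beyond any known classical method:

```python
# Toy RSA with tiny primes, purely to show the structure; real keys use
# primes hundreds of digits long, which is what defeats brute force.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (needs Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)   # recovered equals the original 42
```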
'If quantum computers ever deliver their theoretical promise, then we may need completely new techniques to keep our data safe,' he said.
But while the Enigma code would not stand up long to modern technology, Mustafa said cracking it during the war was a huge achievement, not least as it was considered unbreakable.
'To be able to crack it – it took them months, more than a year – but to be able actually to do this within the lifetime of the war, it was a huge thing,' he said.
'God knows what would have happened if we hadn't cracked Enigma in time.'