How Supercomputing Will Evolve, According to Jack Dongarra

Aug 5, 2025 5:00 AM

WIRED talked with one of the most influential voices in computer science about the potential for AI and quantum to supercharge supercomputers.

Jack Dongarra in Lindau in July 2025. Photograph: Patrick Kunkel/Lindau Nobel Laureate Meetings
High-performance computing (HPC), once the exclusive domain of scientific research, is now a strategic resource for training increasingly complex artificial intelligence models. This convergence of AI and HPC is redefining not only the technologies themselves but also the ways in which knowledge is produced, and it now occupies a strategic position in the global landscape.
To discuss how HPC is evolving, in July WIRED caught up with Jack Dongarra, a US computer scientist who has been a key contributor to the development of HPC software over the past four decades—so much so that in 2021 he earned the prestigious Turing Award. The meeting took place at the 74th Nobel Laureate Meeting in Lindau, Germany, which brought together dozens of Nobel laureates as well as more than 600 emerging scientists from around the world.
This interview has been edited for length and clarity.

Jack Dongarra on stage at the 74th Lindau Nobel Laureate Meetings. Photograph: Patrick Kunkel/Lindau Nobel Laureate Meetings
WIRED: What will be the role of artificial intelligence and quantum computing in scientific and technological development in the coming years?
Jack Dongarra: I would say AI is already playing an important role in how science is done: We're using AI in many ways to help with scientific discovery. It's being used in terms of computing and helping us to approximate how things behave. So I think of AI as a way to get an approximation, and then maybe refine the approximation with the traditional techniques.
Today we have traditional techniques for modeling and simulation, and those are run on computers. If you have a very demanding problem, then you would turn to a supercomputer to understand how to compute the solution. AI is going to make that faster, better, more efficient.
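A minimal Python sketch of the pattern Dongarra describes, with an invented stand-in for the AI model: a cheap surrogate supplies an approximate answer, and a traditional numerical method refines it.

```python
import numpy as np
from scipy.optimize import newton

def f(x):
    # The "demanding problem": find the root of f, i.e., where f(x) = 0.
    return np.cos(x) - x

def surrogate_guess():
    # Stand-in for an AI model's cheap approximation of the answer.
    return 0.7

x0 = surrogate_guess()   # AI supplies the approximation...
root = newton(f, x0)     # ...a classical solver refines it
print(root, f(root))     # f(root) is ~0: the refined answer checks out
```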
AI is also going to have an impact beyond science—it's going to be more important than the internet was when it arrived. It's going to be so pervasive in what we do. It's going to be used in so many ways that we haven't really discovered today. It's going to serve a greater purpose than the internet has over the past 15 or 20 years.
Quantum computing is interesting. It's really a wonderful area for research, but my feeling is we have a long way to go. Today we have examples of quantum computers—hardware always arrives before software—but those examples are very primitive. With a digital computer, we think of doing a computation and getting an answer. The quantum computer is instead going to give us a probability distribution of where the answer is. You're going to make a number of, we'll call them 'runs,' on the quantum computer, and it'll give you a number of potential solutions to the problem, but it's not going to give you the answer. So it's going to be different.
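To make the repeated-runs idea concrete, here is a minimal Python sketch (our illustration, not Dongarra's): each "shot" of a hypothetical quantum computation returns one candidate bitstring, and only the aggregate distribution points to the likely answer.

```python
import random
from collections import Counter

# Hypothetical measurement distribution of a small quantum computation:
# each run ("shot") collapses to one bitstring with some probability.
DISTRIBUTION = {"000": 0.05, "011": 0.55, "101": 0.30, "110": 0.10}

def run_once():
    """Simulate a single run returning one candidate answer."""
    outcomes = list(DISTRIBUTION)
    weights = list(DISTRIBUTION.values())
    return random.choices(outcomes, weights=weights)[0]

shots = [run_once() for _ in range(1000)]
counts = Counter(shots)

# The most frequent outcome is the likely answer, not a guaranteed one;
# candidates would still need to be verified on a classical machine.
print(counts.most_common())
```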
With quantum computing, are we caught in a moment of hype?
I think unfortunately it's been oversold—there's too much hype associated with quantum. The result of that typically is that people will get all excited about it, and then it doesn't live up to any of the promises that were made, and then the excitement will collapse.
We've seen this before: AI has gone through that cycle and has recovered. And now today AI is a real thing. People use it, it's productive, and it's going to serve a purpose for all of us in a very substantial way. I think quantum has to go through that winter, where people will be discouraged by it, they'll ignore it, and then there'll be some bright people who figure out how to use it and how to make it so that it is more competitive with traditional things.
There are many issues that have to be worked out. Quantum computers are very easy to disturb. They're going to have a lot of 'faults'—they will break down because of the nature of how fragile the computation is. Until we can make things more resistant to those failures, it's not going to do quite the job that we hope that it can do. I don't think we'll ever have a laptop that's a quantum laptop. I may be wrong, but certainly I don't think it'll happen in my lifetime.
Quantum computers also need quantum algorithms, and today we have very few algorithms that can effectively be run on a quantum computer. So quantum computing is in its infancy, and so is the infrastructure that will use it. The quantum algorithms, the quantum software, the techniques that we have: all of those are very primitive.
When can we expect—if ever—the transition from traditional to quantum systems?
So today we have many supercomputing centers around the world, and they have very powerful computers. Those are digital computers. Sometimes the digital computer gets augmented with something to enhance performance—an accelerator. Today those accelerators are GPUs, graphics processing units. The GPU does one thing very well, and it just does that thing; it's been architected to do that. In the old days, that was important for graphics; today we're refactoring that so that we can use a GPU to satisfy some of the computational needs that we have.
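As a rough illustration of that repurposing, a minimal sketch assuming the CuPy library and a CUDA-capable Nvidia GPU; without them, the same array code falls back to NumPy on the CPU.

```python
import numpy as np

try:
    import cupy as xp  # array library that executes on an Nvidia GPU
    ON_GPU = True
except ImportError:
    xp = np            # same array API, executed on the CPU
    ON_GPU = False

# A dense matrix multiply: the kind of regular, data-parallel kernel
# GPUs were built for in graphics and are now refactored to accelerate.
a = xp.random.rand(2048, 2048)
b = xp.random.rand(2048, 2048)
c = a @ b

print(f"computed on {'GPU' if ON_GPU else 'CPU'}; result shape {c.shape}")
```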
In the future, I think that we will augment the CPU and the GPU with other devices. Perhaps quantum would be another device that we would add. Maybe it would be neuromorphic, computing that imitates how our brain works. And then we have optical computers: think of shining light and having that light interfere, where the interference basically is the computation you want it to do. Think of an optical computer that takes two beams of light, with numbers encoded in the light, and when they interact in this computing device, it produces an output which is the multiplication of those numbers. And that happens at the speed of light, so it's incredibly fast. That's a device that perhaps could fit in alongside the CPU, the GPU, and quantum and neuromorphic devices. Those are all things that perhaps could combine.
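No runtime for such a heterogeneous machine exists yet; the sketch below is purely hypothetical and shows only the shape of the idea, a dispatch table that routes each class of kernel to the device suited to it. Every backend here is a stand-in.

```python
# Hypothetical device backends; real ones would drive actual hardware.
def cpu_backend(data):          # general-purpose, control-heavy work
    return sum(data)

def gpu_backend(data):          # dense, data-parallel arithmetic
    return [x * x for x in data]

def quantum_backend(data):      # sampling: returns a distribution, not an answer
    return {"0101": 0.6, "1010": 0.4}

def neuromorphic_backend(data): # event-driven pattern recognition
    return max(data)

DISPATCH = {
    "control": cpu_backend,
    "dense": gpu_backend,
    "sample": quantum_backend,
    "pattern": neuromorphic_backend,
}

def run(kind, data):
    """Route a kernel to the device class suited to it."""
    return DISPATCH[kind](data)

print(run("dense", [1, 2, 3]))
```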
How is the current geopolitical competition—between China, the United States, and beyond—affecting the development and sharing of technology?
The US is restricting computing at a certain level from going to China. Certain parts from Nvidia are no longer allowed to be sold there, for example. But they're sold to areas around China, and when I go visit Chinese colleagues and look at what they have in their computers, they have a lot of Nvidia stuff. So there's an unofficial pathway.
At the same time, China has pivoted from buying Western technology to investing in its own technology, putting more funding into the research necessary to advance it. Perhaps this restriction that's been imposed has backfired by causing China to accelerate the development of parts that they can control very much more than they could otherwise.
The Chinese have also decided that information about their supercomputers should not be advertised. We do know about them—what they look like, what their potential is, and what they've done—but there's no metric that allows us to benchmark, in a very controlled way, how those computers compare against the machines that we have. They have very powerful machines that are probably equal in power to the most significant machines that we have in the US.
They're built on technology that was invented or designed in China. They've designed their own chips, which compete with the chips in the computers that we have in the West. And the question that people ask is: Where were the chips fabricated? Most chips used in the West are fabricated by the Taiwan Semiconductor Manufacturing Company. China has fabrication technology that is a generation or two behind TSMC's, but they're going to catch up.
My guess is that some of the Chinese chips are also fabricated in Taiwan. When I ask my Chinese friends 'Where were your chips manufactured?' they say China. And if I push them and say 'Well, were they manufactured in Taiwan?' the answer that eventually comes back is that Taiwan is part of China.
Jack Dongarra on the shores of Lake Constance at the 74th Nobel Laureate Meeting. Photograph: Gianluca Dotti/Wired
How will the role of programmers and developers change as AI evolves? Will we get to write software using only natural language?
AI has a very important role, I think, in helping to take away some of the time-consuming parts of developing programs. It has gathered all the information that's available about everybody else's programs, and it synthesizes that and can push it forward. I've been very impressed when I have asked some of these systems to write a piece of software to do a certain task; the AI does a pretty good job. And then I can refine that with another prompt, saying 'Optimize this for this kind of computer,' and it does a pretty good job of that too. In the future, I think more and more we will be using language to describe to the AI what we want, and then have it write a program to carry out that function.
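For a flavor of that refinement loop, here is a hypothetical before-and-after in Python (our own example, not output from any particular model): a first answer as a straightforward triple loop, then the kind of cache-blocked version a follow-up "optimize this" prompt might yield.

```python
import numpy as np

def matmul_naive(a, b):
    """What a first 'write me a matrix multiply' answer might look like."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i, j] += a[i, p] * b[p, j]
    return c

def matmul_blocked(a, b, bs=64):
    """A refinement for cache-based machines: blocking improves data reuse."""
    n, k = a.shape
    _, m = b.shape
    c = np.zeros((n, m))
    for i in range(0, n, bs):
        for p in range(0, k, bs):
            for j in range(0, m, bs):
                c[i:i+bs, j:j+bs] += a[i:i+bs, p:p+bs] @ b[p:p+bs, j:j+bs]
    return c

a, b = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(matmul_naive(a, b), matmul_blocked(a, b))
```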
Now of course, there are limits, and we have to be careful about hallucinations or something giving us the wrong results. But maybe we can build in some checks to verify the solutions that AI produces, and we can use those checks as a way of measuring the potential accuracy of a solution. We should be aware of the potential problems, but I think we have to move ahead on this front.
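One concrete shape such a check can take (a sketch of ours, not a method from the interview): for a linear system, verifying a proposed solution is far cheaper than computing it, so a residual test can gate acceptance of an AI-produced answer.

```python
import numpy as np

def accept_solution(a, x, b, tol=1e-8):
    """Accept a proposed solution x of Ax = b only if the relative
    residual is small; the check costs one matrix-vector product."""
    residual = np.linalg.norm(a @ x - b)
    scale = np.linalg.norm(a) * np.linalg.norm(x) + np.linalg.norm(b)
    return residual / scale < tol

a = np.random.rand(100, 100)
b = np.random.rand(100)
x = np.linalg.solve(a, b)   # stand-in for an AI-proposed answer
print(accept_solution(a, x, b))
```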
This story originally appeared on WIRED Italia and has been translated from Italian.