Scientists hit quantum computer error rate of 0.000015% — a world record achievement that could lead to smaller and faster machines
Scientists have achieved the lowest quantum computing error rate ever recorded — an important step toward solving the fundamental challenges that stand in the way of practical, utility-scale quantum computers.
In research published June 12 in the journal Physical Review Letters, the scientists demonstrated a quantum error rate of 0.000015%, which equates to one error per 6.7 million operations.
This achievement represents an improvement of nearly an order of magnitude in both fidelity and speed over the previous record of approximately one error for every 1 million operations — achieved by the same team in 2014.
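As a back-of-the-envelope check (illustrative arithmetic, not taken from the paper), the quoted percentage converts to a per-operation error probability like this:

```python
# Back-of-the-envelope check of the quoted figures (illustrative only).
new_rate = 0.000015 / 100   # 0.000015% expressed as a probability: 1.5e-7
old_rate = 1 / 1_000_000    # previous record: roughly one error per million ops

print(f"{1 / new_rate:,.0f} operations per error")     # ~6,666,667
print(f"{old_rate / new_rate:.1f}x lower error rate")  # ~6.7x
```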
The prevalence of errors, or "noise," in quantum operations can render a quantum computer's outputs useless.
This noise comes from a variety of sources, including imperfections in the control methods (essentially, problems with the computer's architecture and algorithms) and the laws of physics. That's why considerable efforts have gone into quantum error correction.
While errors related to natural law, such as decoherence (the natural decay of the quantum state) and leakage (the qubit state leaking out of the computational subspace), can be reduced only within those laws, the team's progress was achieved by reducing the noise generated by the computer's architecture and control methods to almost zero.
Related: Scientists make 'magic state' breakthrough after 20 years — without it, quantum computers can never be truly useful
"By drastically reducing the chance of error, this work significantly reduces the infrastructure required for error correction, opening the way for future quantum computers to be smaller, faster, and more efficient," Molly Smith, a graduate student in physics at the University of Oxford and co-lead author of the study, said in a statement. "Precise control of qubits will also be useful for other quantum technologies such as clocks and quantum sensors."
Record-low quantum computing error rates
The quantum computer used in the team's experiment relied on a bespoke platform that eschews architectures built on photons as qubits — the quantum equivalent of computer bits — in favor of qubits made of "trapped ions."
The study was also conducted at room temperature, which the researchers said simplifies the setup required to integrate this technology into a working quantum computer.
Whereas many quantum systems deploy superconducting circuits or "quantum dots," or use lasers — often called "optical tweezers" — to hold individual atoms in place for operation as qubits, the team used microwaves to control a series of trapped calcium-43 ions.
With this approach, the ions are placed into a hyperfine "atomic clock" state. According to the study, this technique allowed the researchers to perform "quantum gates" — the basic operations a quantum computer carries out, analogous to the logic gates of a classical computer — with greater precision than photon-based methods allow.
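For readers who want the math, here is a minimal sketch (not the team's code) of what a single-qubit gate is: a unitary matrix that rotates the qubit's state, in this case the kind of rotation a resonant microwave pulse drives between two hyperfine levels.

```python
import numpy as np

# Illustrative sketch, not the team's code: a single-qubit gate is a 2x2
# unitary matrix acting on the qubit's state vector. This one is a pi/2
# rotation, the kind a resonant microwave pulse drives between two levels.
theta = np.pi / 2
gate = np.array([
    [np.cos(theta / 2), -1j * np.sin(theta / 2)],
    [-1j * np.sin(theta / 2), np.cos(theta / 2)],
])

qubit = np.array([1, 0], dtype=complex)  # start in the |0> state
qubit = gate @ qubit                     # apply the gate

print(np.abs(qubit) ** 2)  # [0.5, 0.5]: an equal superposition of |0> and |1>
```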
Once the ions were placed into a hyperfine atomic clock state, the researchers calibrated them via an automated control procedure that regularly corrected for amplitude and frequency drift in the microwave control signals.
In other words, the researchers developed an algorithm to detect and correct the noise produced by the microwaves used to control the ions. With this noise removed, the team could conduct quantum operations at or near the lowest error rate physically possible.
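The paper's actual calibration code isn't published, but the general shape of such a drift-correction loop can be sketched as follows; the SimulatedController class and its error-estimation methods are hypothetical stand-ins for real hardware and for Rabi- and Ramsey-style measurements.

```python
import random

class SimulatedController:
    """Hypothetical stand-in for microwave control hardware with slow drift."""

    def __init__(self):
        self.amplitude = 1.0  # ideal pulse amplitude (arbitrary units)
        self.frequency = 0.0  # detuning from the qubit transition (ideally 0)

    def drift(self):
        # Slow random walk of the control parameters between corrections.
        self.amplitude += random.gauss(0, 1e-3)
        self.frequency += random.gauss(0, 1e-3)

    # In a real experiment these estimates would come from measurements on
    # the ion itself, not from direct readout of the hardware.
    def amplitude_error(self):
        return self.amplitude - 1.0

    def frequency_error(self):
        return self.frequency


def calibrate(ctrl, rounds=1000, gain=0.5):
    """Feedback loop: re-estimate the drift each round and steer it back."""
    for _ in range(rounds):
        ctrl.drift()
        ctrl.amplitude -= gain * ctrl.amplitude_error()
        ctrl.frequency -= gain * ctrl.frequency_error()


ctrl = SimulatedController()
calibrate(ctrl)
# Without feedback the parameters would random-walk away; with it, both
# residuals stay near zero (on the order of the per-round drift, ~1e-3).
print(abs(ctrl.amplitude_error()), abs(ctrl.frequency_error()))
```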
Using this method, it is now possible to develop quantum computers capable of conducting single-qubit gate operations (those performed by a gate acting on one qubit, as opposed to gates requiring multiple qubits) with nearly zero errors at large scales.
This could lead to more efficient quantum computers in general. Per the study, the work achieves a new state-of-the-art single-qubit gate error and provides a breakdown of all known error sources, accounting for most of the errors produced in single-qubit gate operations.
This means engineers who build quantum computers with the trapped-ion architecture and developers who create the algorithms that run on them won't have to dedicate as many qubits to the sole purpose of error correction.
RELATED STORIES
—'The science is solved': IBM to build monster 10,000-qubit quantum computer by 2029
—Scientists forge path to the first million-qubit processor for quantum computers after 'decade in the making' breakthrough
—'Quantum AI' algorithms already outpace the fastest supercomputers, study says
By reducing the error, the new method reduces the number of qubits required and the cost and size of the quantum computer itself, the researchers said in the statement.
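To see why, consider a rough estimate using the textbook surface-code scaling formula (an assumption for illustration; the study itself reports no such projection): the code distance d needed to reach a target logical error rate shrinks as the physical error rate p falls, and the physical-qubit cost grows roughly as 2d².

```python
# Rough illustration, not a projection from the study: textbook surface-code
# scaling p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2), with an assumed
# threshold p_th = 1e-2 and an assumed target logical error rate of 1e-15.

def distance_needed(p_phys, p_target=1e-15, p_th=1e-2):
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d

for label, p in [("2014 record", 1e-6), ("new record", 1.5e-7)]:
    d = distance_needed(p)
    print(f"{label}: distance {d}, ~{2 * d * d} physical qubits per logical qubit")
```

On these assumed numbers, the lower error rate cuts the per-logical-qubit overhead from roughly 98 physical qubits to roughly 50; in practice the savings depend heavily on two-qubit gate errors, which this result does not improve.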
This isn't a panacea for the industry, however, as many quantum algorithms require two-qubit (or multi-qubit) gates operating alongside single-qubit gates to perform computations beyond rudimentary functions. The error rate in two-qubit gate operations is still roughly 1 in 2,000.
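Under a simple independent-error model (an illustration, not a calculation from the study), those two-qubit gates dominate a circuit's failure budget even with near-perfect single-qubit gates:

```python
# Simple independent-error model (an illustration, not from the study): the
# chance a circuit finishes without any gate error is the product of the
# per-gate success probabilities.
p1 = 1.5e-7    # single-qubit gate error (the new record)
p2 = 1 / 2000  # two-qubit gate error (the rough figure quoted above)

n1, n2 = 10_000, 1_000  # hypothetical gate counts for a modest circuit
p_success = (1 - p1) ** n1 * (1 - p2) ** n2
print(f"{p_success:.3f}")  # ~0.606: the two-qubit gates dominate the failures
```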
While this study represents an important step toward practical, utility-scale quantum computing, it doesn't address all of the "noise" problems inherent in complex multi-qubit gate operations.
In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt. That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account. Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect if URLs were malicious and stop image rendering if they are dangerous. To get around this, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work, that the researchers used URLs from Microsoft's Azure Blob cloud storage. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes. The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, activating a smart home's lights and boiler remotely. While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also allow malicious hackers a way into an organization's other systems. Bargury says that hooking up LLMs to external data sources means they will be more capable and increase their utility, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.