Quantum Computers Simulate Particle 'String Breaking' in a Physics Breakthrough
Subatomic particles such as quarks can pair up when linked by 'strings' of force fields — and release energy when the strings are pulled to the point of breaking. Two teams of physicists have now used quantum computers to mimic this phenomenon and watch it unfold in real time.
The results, described in two Nature papers on June 4, are the latest in a series of breakthroughs towards using quantum computers for simulations that are beyond the ability of any ordinary computers.
'String breaking is a very important process that is not yet fully understood from first principles,' says Christian Bauer, a physicist at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California. Physicists can calculate the final results of particle collisions that form or break strings using classical computers, but cannot fully simulate what happens in between. The success of the quantum simulations is 'incredibly encouraging,' Bauer says.
Each experiment was conducted by an international collaboration involving academic and industry researchers — one team at QuEra Computing, a start-up company in Cambridge, Massachusetts, and another at the Google Quantum AI Lab in Santa Barbara, California.
The researchers using QuEra's Aquila machine encoded information in atoms that were arranged in a 2D honeycomb pattern, each suspended in place by an optical 'tweezer'. The quantum state of each atom — a qubit that could be excited or relaxed — represented the electric field at a point in space, explains co-author Daniel González-Cuadra, a theoretical physicist now at the Institute for Theoretical Physics in Madrid. In the other experiment, researchers encoded the 2D quantum field in the states of superconducting loops on Google's Sycamore chip.
The teams used diametrically opposite quantum-simulation philosophies. The atoms in Aquila were arranged so that the electrostatic forces between them mimicked the behaviour of the electric field, and continuously evolved towards their own states of lower energy — an approach called analogue quantum simulation. The Google machine was instead used as a 'digital' quantum simulator: the superconducting loops were made to follow the evolution of the quantum field 'by hand', through a discrete sequence of manipulations.
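To make that distinction concrete, here is a minimal illustrative sketch (in Python with NumPy and SciPy, not code from either experiment) of what 'digital' simulation means in practice: continuous evolution under a toy two-qubit Hamiltonian is approximated by a repeated sequence of discrete steps, each generated by only part of the Hamiltonian. The Hamiltonian, evolution time and step count are arbitrary choices made purely for illustration.

```python
# Toy illustration (not from the papers): contrast evolving a quantum state
# continuously under the full Hamiltonian ("analogue"-style) with building the
# same evolution out of a discrete sequence of simpler steps ("digital",
# first-order Trotterization). The two-qubit Ising-type Hamiltonian below is
# an arbitrary example chosen only for demonstration.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

H_field = np.kron(X, I) + np.kron(I, X)   # "transverse field" part
H_coupling = np.kron(Z, Z)                # "interaction" part
H = H_field + H_coupling

t, n_steps = 1.0, 50
dt = t / n_steps

# Continuous evolution under the full Hamiltonian.
U_exact = expm(-1j * H * t)

# Discrete sequence of manipulations: alternate short evolutions
# under the two pieces, repeated n_steps times.
U_step = expm(-1j * H_field * dt) @ expm(-1j * H_coupling * dt)
U_digital = np.linalg.matrix_power(U_step, n_steps)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
error = np.linalg.norm(U_exact @ psi0 - U_digital @ psi0)
print(f"Trotter error after {n_steps} discrete steps: {error:.2e}")
```

Making the steps shorter (more of them for the same total time) shrinks the error, which is why digital simulators trade circuit depth for accuracy, whereas an analogue device simply lets the engineered interactions run.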
In both cases, the teams set up strings in the field that effectively acted like rubber bands connecting two particles. Depending on how the researchers tuned the parameters, the strings could be stiff or wobbly, or could break up. 'In some cases, the whole string just dissolves: the particles become deconfined,' says Frank Pollmann, a physicist at the Technical University of Munich (TUM) in Garching, Germany, who helped to lead the Google experiment.
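The rubber-band picture can be captured with a rough back-of-the-envelope sketch (an illustration of the general heuristic, not a calculation from either paper): a confining string costs energy roughly in proportion to its length, so once stretching it further costs more than creating a new particle-antiparticle pair, it becomes favourable for the string to snap. The string tension and pair-creation cost below are arbitrary values chosen for illustration.

```python
# Back-of-the-envelope sketch: an unbroken string between two charges costs
# roughly sigma * r (tension times length). Breaking it requires creating a
# new particle-antiparticle pair at a cost of about 2 * m. When sigma * r
# exceeds 2 * m, breaking becomes energetically favourable.
sigma = 1.0   # string tension (energy per unit length), arbitrary units
m = 3.0       # energy cost of one new particle, arbitrary units

for r in range(1, 11):
    intact = sigma * r   # energy of the unbroken string at separation r
    broken = 2 * m       # energy cost of the pair that caps the broken ends
    state = "breaks" if broken < intact else "holds"
    print(f"separation {r:2d}: intact={intact:4.1f}  broken={broken:4.1f}  -> string {state}")
```

In the experiments, tuning the simulator's parameters plays the role of changing these energy scales, which is how the teams could dial the strings between stiff, wobbly and breaking regimes.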
Although simulating strings in a 2D electric field could have applications for studying the physics of materials, it is still a long way from fully simulating high-energy interactions, such as those that occur in particle colliders, which are in 3D and involve the much more complex strong nuclear force. 'We do not have a clear path at this point how to get there,' says Monika Aidelsburger, a physicist at the Max Planck Institute of Quantum Optics in Munich, Germany.
Still, the latest results are exciting, and progress in quantum simulation in general has been 'really amazing and very fast,' Aidelsburger says.
Last year, Bauer and his LBNL colleague Anthony Ciavarella were among the first teams to simulate the strong nuclear force on a quantum computer. Approaches that replace qubits with qudits — which can have more than two quantum states and can be more realistic representations of a quantum field — could also make simulations more powerful, researchers say.
This article is reproduced with permission and was first published on June 5, 2025.