Sacred law of entropy also works in the quantum world, study suggests
According to the second law of thermodynamics, the entropy of an isolated system tends to increase over time. Everything around us follows this law: melting ice, a room growing messier, hot coffee cooling down, even aging are all examples of entropy increasing over time.
Until now, scientists believed that quantum physics was an exception to this law. This is because, about 90 years ago, mathematician John von Neumann published a series of papers in which he showed mathematically that if we have complete knowledge of a system's quantum state, its entropy remains constant over time.
However, a new study from researchers at the Vienna University of Technology (TU Wien) challenges this notion. It suggests that the entropy of a closed quantum system also increases over time until it reaches its peak level.
"It depends on what kind of entropy you look at. If you define the concept of entropy in a way that is compatible with the basic ideas of quantum physics, then there is no longer any contradiction between quantum physics and thermodynamics," the TU Wien team notes.
The study authors highlighted an important detail in von Neumann's argument: he stated that the entropy of a quantum system doesn't change only when we have full information about the system.
However, quantum theory itself tells us that it's impossible to have complete knowledge of a quantum system: we can only ever measure certain properties, and only with some uncertainty. This means that von Neumann entropy isn't the right way to quantify the randomness and disorder in quantum systems.
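To see von Neumann's claim in numbers, here is a minimal sketch (an illustration of the standard textbook result, not code from the study): the von Neumann entropy S(rho) = -Tr(rho ln rho) of a state is left unchanged by unitary evolution rho -> U rho U†, which is how a fully known closed quantum system evolves. The random state and unitary below are stand-ins.

```python
# Minimal sketch: von Neumann entropy is invariant under unitary
# evolution rho -> U rho U^dagger of a closed quantum system.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in nats, computed from the density matrix's eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

rng = np.random.default_rng(0)

# A random mixed state of a qubit (Hermitian, positive, trace 1).
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random unitary from the QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

s_before = von_neumann_entropy(rho)
s_after = von_neumann_entropy(Q @ rho @ Q.conj().T)
print(s_before, s_after)   # the two values agree to machine precision
```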
So then, what's the right way? Well, "instead of calculating the von Neumann entropy for the complete quantum state of the entire system, you could calculate an entropy for a specific observable," the study authors explain.
This can be achieved using Shannon entropy, a concept proposed by mathematician Claude Shannon in his 1948 paper "A Mathematical Theory of Communication." Shannon entropy measures the uncertainty in the outcome of a specific measurement: it tells us how much new information we gain when observing a quantum system.
"If there is only one possible measurement result that occurs with 100% certainty, then the Shannon entropy is zero. You won't be surprised by the result, you won't learn anything from it. If there are many possible values with similarly large probabilities, then the Shannon entropy is large," Florian Meier, first author of the study and a researcher at TU Wien, said.
When we reimagine the entropy of a quantum system through Claude Shannon's lens, we begin with a system in a state of low Shannon entropy, meaning its measurement outcomes are relatively predictable.
For example, imagine you have an electron and you decide to measure its spin (which can be up or down). If you already know the spin is 100% up, the Shannon entropy is zero; we learn nothing new from the measurement.
If instead the spin is 50% up and 50% down, the Shannon entropy is high (one bit), because we are equally likely to get either result and the measurement gives us new information. As time passes and the system evolves, the entropy of such a measurement increases, since you become less and less sure of the outcome.
However, eventually, the entropy reaches a point where it levels off, meaning the system's unpredictability stabilizes. This mirrors what we observe in classical thermodynamics, where entropy increases until it reaches equilibrium and then stays constant.
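This rise-and-plateau behavior shows up in a toy simulation (a generic sketch, not the authors' model): start a small closed system in a single basis state, evolve it under a randomly drawn Hamiltonian, and watch the Shannon entropy of one fixed measurement climb from zero and then level off near its maximum of log2(d) bits.

```python
# Toy sketch: Shannon entropy of a fixed measurement basis rises and
# then plateaus as a small closed quantum system evolves unitarily.
import numpy as np

rng = np.random.default_rng(1)
d = 64                                     # Hilbert-space dimension

# Random Hermitian Hamiltonian and its eigendecomposition.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2
evals, evecs = np.linalg.eigh(H)

psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0                              # one certain outcome: 0 bits at t = 0

c = evecs.conj().T @ psi0                  # expand psi0 in the energy eigenbasis
for t in [0.0, 0.2, 0.5, 1.0, 2.0, 5.0, 20.0]:
    psi_t = evecs @ (np.exp(-1j * evals * t) * c)   # psi(t) = exp(-iHt) psi0
    p = np.abs(psi_t) ** 2                 # measurement outcome probabilities
    p = p[p > 0]
    bits = -np.sum(p * np.log2(p))
    print(f"t = {t:5.1f}   Shannon entropy = {bits:.2f} bits")
```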
According to the study, the same behavior holds for quantum systems involving many particles and many possible measurement outcomes.
"This shows us that the second law of thermodynamics is also true in a quantum system that is completely isolated from its environment. You just have to ask the right questions and use a suitable definition of entropy," Marcus Huber, senior study author and an expert in quantum information science at TU Wien, said.
The study is published in the journal PRX Quantum.
Related Articles
Yahoo, 6 days ago
Single platinum atoms spotted in 2D lattice for the first time, unlocking smarter gas sensors
Austrian scientists have achieved a breakthrough by embedding individual platinum atoms into an ultrathin material and pinpointing their positions within the lattice with atomic precision for the first time. The research team from the University of Vienna and the Vienna University of Technology (TU Wien) utilized a new method that combines defect engineering in the host material, the controlled placement of platinum atoms, and a cutting-edge, high-contrast electron imaging technique known as ptychography.

Jani Kotakoski, PhD, an expert in the physics of nanostructured materials and research group leader, highlighted that the achievement sets the stage for tailoring materials with atomic precision. Active centers, tiny sites on the material's surface where chemical reactions occur or gas molecules can specifically bind, are crucial for enhancing the efficiency, selectivity, and overall performance of materials used in catalysis and gas detection. These centers are especially effective when made up of single metal atoms like platinum, which the team aimed not only to produce but also to visualize with atomic-level precision.

Known for its highly tunable structure, the host material molybdenum disulfide (MoS₂) is an ultrathin semiconductor. To introduce new active sites, the scientists used helium ion irradiation to deliberately create atomic-scale defects on its surface, such as sulfur vacancies. These vacancy sites were then selectively filled with individual platinum atoms, allowing the team to engineer the material at the atomic level. This precise atomic substitution, known as doping, enables fine-tuning of the material's properties for specific applications, such as catalysis or gas detection.

However, previous studies had not provided direct evidence of the exact positions of foreign atoms within the atomic lattice, as conventional electron microscopy lacks the contrast needed to clearly distinguish between defect types such as single and double sulfur vacancies. To address this challenge, the team used a state-of-the-art imaging method known as Single-Sideband Ptychography (SSB), which analyzes electron diffraction patterns to achieve atomic-level resolution.

"With our combination of defect engineering, doping, and ptychography, we were able to visualize even subtle differences in the atomic lattice - and clearly determine whether a platinum atom had been incorporated into a vacancy or merely resting loosely on the surface," David Lamprecht, MSc, a student at the University of Vienna's institute for microelectronics and lead author of the study, said.

With the help of computer simulations, the scientists were able to precisely identify the different incorporation sites, such as positions originally occupied by sulfur or molybdenum atoms, marking a key advance toward targeted material design.

The team believes that combining targeted atom placement with atomically precise imaging unlocks new possibilities for advanced catalyst design and highly selective gas sensing. While individual platinum atoms placed at precisely defined sites can serve as highly efficient catalysts, as in eco-friendly hydrogen production, the material can also be tailored to respond selectively to specific gas molecules. "With this level of control over atom placement, we can develop selectively functionalized sensors - a significant improvement over existing methods," Kotakoski concluded in a press release.
According to the research team, the approach is not limited to platinum and molybdenum disulfide but can also be applied to a wide range of 2D materials and dopant atom combinations. By gaining more precise control over defect creation and incorporating post-treatment steps, the researchers now hope to further refine the technique. Their final goal is to develop functional materials with customized properties, in which every atom is positioned with absolute precision. The study has been published in the journal Nano Letters.
Yahoo, 15-05-2025
A spaceship moving near the speed of light would appear rotated, special relativity experiment proves
In a bizarre repercussion of Albert Einstein's Special Theory of Relativity, objects traveling close to the speed of light appear rotated.

The Special Theory of Relativity, or special relativity for short, describes what happens to objects traveling at close to the speed of light. In particular, it discusses two major repercussions of moving so quickly. One is that time appears to pass more slowly for the object traveling close to the speed of light relative to slower-moving bodies around it. This is rooted in a phenomenon called "time dilation," which also leads to the famous Twin Paradox, has been proven experimentally, and is even accounted for when building certain kinds of technology. Global Positioning System (GPS) satellites in orbit, for instance, have to account for time dilation when providing accurate navigation data.

Another consequence is what we call length contraction. "Suppose a rocket whizzes past us at 90% of the speed of light," Peter Schattschneider, a professor of physics at TU Wien, the Vienna University of Technology, said in a statement. "For us, it no longer has the same length as before it took off, but is 2.3 times shorter." This doesn't mean the rocket literally contracts, but rather that it appears contracted to an observer. Astronauts on board the rocket, for example, would still measure their spacecraft to be the same length it has always been. It's all relative; hence the name of the theory.

One consequence of length contraction was proposed in 1959 by physicists James Terrell and Roger Penrose. Known as the Terrell–Penrose effect, it predicted that objects moving at a high fraction of the speed of light should appear rotated. "If you wanted to take a picture of the rocket as it flew past, you would have to take into account that the light from different points took different lengths of time to reach the camera," said Schattschneider.

For example, Schattschneider describes trying to take an image of a cube-shaped spacecraft (perhaps a Borg cube!) moving obliquely past us at almost the speed of light. First, we need to state the obvious: light emitted (or reflected) from the corner of the cube closest to us travels a shorter distance than light from the corner farthest from us. Two photons departing at the same time from each of those two corners would therefore reach us at slightly different times, because one photon has to travel farther than the other. This means that in a still image, in which the captured photons have all arrived at the camera lens at the same time, the photon from the far corner must have departed earlier than the one from the near corner in order to arrive synchronously. So far, so logical.

However, this cube is not stationary; it's moving extremely fast and covers a lot of ground very quickly. Thus, in our hypothetical still image of this speeding cube, the photon from the far corner was emitted earlier than the one from the near corner, as expected, but at that earlier moment the cube was in a different position. And because the cube is moving at nearly the speed of light, that position was very different indeed. "This makes it look to us as if the cube had been rotated," said Schattschneider. By the time these two photons reach us, the corner on the far side looks like it is at the near corner, and vice versa.
This effect had never been observed directly, though, because accelerating anything other than subatomic particles to near the speed of light requires too much energy. Now, a team of researchers from TU Wien and the University of Vienna, including Schattschneider, has found a way to simulate the conditions required to rotate the image of a relativistic object.

Students Dominik Hornoff and Victoria Helm of TU Wien performed an experiment in which they engineered a scenario where they could pretend the speed of light was just 6.56 feet (2 meters) per second. This had the effect of slowing the whole process down so they could capture it on a high-speed camera. "We moved a cube and a sphere around the lab and used the high-speed camera to record the laser flashes reflected from different points on these objects at different times," said Hornoff and Helm in a joint statement. "If you get the timing right, you can create a situation that produces the same results as if the speed of light were no more than two meters per second."

The cube and the sphere were deformed to mimic length contraction: the cube, simulated to be moving at 80% of the speed of light, was actually a cuboid with an aspect ratio of 0.6, while the sphere was flattened into a disk in accordance with a velocity of 99.9% of the speed of light.

Hornoff and Helm illuminated the cube and the sphere with extremely short pulses from a laser and recorded images of the reflected light with camera exposures of just a trillionth of a second (a span of time known as a picosecond). After each image, the cube and the sphere were repositioned as though they were moving at close to the speed of light. The images were then combined to include only those in which each object was illuminated by the laser at the moment when light would have been emitted if the speed of light were only two meters per second, rather than the 983,571,056 feet (299,792,458 meters) per second that it actually is.

"We combined the still images into short video clips of the ultra-fast objects. The result was exactly what we expected," said Schattschneider. "A cube appears twisted, a sphere remains a sphere but the north pole is in a different place."

The Terrell–Penrose effect is just another example of how nature, when pushed to extremes, becomes topsy-turvy, creating phenomena quite alien to our everyday experience. The findings were presented on May 5 in the journal Communications Physics.
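The contraction factors quoted above are easy to verify with the standard length-contraction formula, 1/gamma = sqrt(1 - beta^2); a quick sketch:

```python
# Quick check of the contraction factors quoted in the article:
# an object moving at speed beta*c is measured shortened along its
# direction of motion by a factor 1/gamma = sqrt(1 - beta^2).
import math

for beta in (0.8, 0.9, 0.999):
    factor = math.sqrt(1.0 - beta ** 2)   # contracted length / rest length
    print(f"beta = {beta}: contracted to {factor:.3f} of rest length "
          f"({1 / factor:.2f}x shorter)")

# beta = 0.8   -> 0.600 (the cuboid's 0.6 aspect ratio)
# beta = 0.9   -> 0.436 (about 2.3x shorter, as Schattschneider says)
# beta = 0.999 -> 0.045 (the sphere flattened into a disk)
```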


Gizmodo, 13-05-2025
Gravity Could Be Proof We're Living in a Computer Simulation, New Theory Suggests
Gravity may not be a fundamental force of nature, but a byproduct of the universe streamlining information like a cosmic computer.

We have long taken it for granted that gravity is one of the basic forces of nature, one of the invisible threads that keeps the universe stitched together. But suppose that this is not true. Suppose the law of gravity is simply an echo of something more fundamental: a byproduct of the universe operating under a computer-like code.

That is the premise of my latest research, published in the journal AIP Advances. It suggests that gravity is not a mysterious force that attracts objects towards one another, but the product of an informational law of nature that I call the second law of infodynamics. It is a notion that seems like science fiction, but one that is grounded in physics and in evidence that the universe appears to be operating suspiciously like a computer simulation.

In digital technologies, right down to the apps on your phone and the world of cyberspace, efficiency is key. Computers compact and restructure their data all the time to save memory and computing power. Maybe the same is taking place all over the universe? Information theory, the mathematical study of the quantification, storage and communication of information, may help us understand what's going on. Originally developed by mathematician Claude Shannon, it has become increasingly popular in physics and is used in a growing range of research areas.

In a 2023 paper, I used information theory to propose my second law of infodynamics. It stipulates that information 'entropy', or the level of information disorganisation, must reduce or stay static within any given closed information system. This is the opposite of the familiar second law of thermodynamics, which dictates that physical entropy, or disorder, always increases.

Take a cooling cup of coffee. Energy flows from hot to cold until the temperature of the coffee is the same as the temperature of the room and its energy is at a minimum, a state called thermal equilibrium. The entropy of the system is at a maximum at this point, with all the molecules maximally spread out and having the same energy. That means the spread of energies per molecule in the liquid is reduced. If one considers the information content of each molecule based on its energy, then at the start, in the hot cup of coffee, the information entropy is at a maximum, and at equilibrium the information entropy is at a minimum, because almost all molecules are at the same energy level, becoming identical characters in an informational message. So the spread of different energies available is reduced when there's thermal equilibrium.

But if we consider just location rather than energy, then there's lots of information disorder when particles are distributed randomly in space: the information required to keep track of them is considerable. When they consolidate themselves together under gravitational attraction, however, the way planets, stars and galaxies do, the information gets compacted and more manageable. In simulations, that's exactly what happens when a system tries to function more efficiently.
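To make that compaction argument concrete, consider a toy calculation (a simple sketch of the intuition, not the derivation in the paper): the Shannon entropy of particle positions over a grid of informational 'cells' is high when matter is scattered and low when it has clumped together.

```python
# Toy illustration: positional Shannon entropy of particles over a
# grid of cells. Scattered matter takes more bits to describe than
# the same matter gathered into a few cells.
import numpy as np

def positional_entropy(counts):
    """Entropy (bits) of the distribution of particles over cells."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

n_cells, n_particles = 1024, 10_000
rng = np.random.default_rng(2)

# Dispersed: particles scattered uniformly over all cells.
dispersed = np.bincount(rng.integers(0, n_cells, n_particles),
                        minlength=n_cells)

# Clumped: the same particles gathered into just four cells.
clumped = np.zeros(n_cells)
clumped[:4] = n_particles / 4

print(positional_entropy(dispersed))   # close to log2(1024) = 10 bits
print(positional_entropy(clumped))     # exactly 2 bits
```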
So, matter flowing under the influence of gravity need not be the result of a force at all. Perhaps it is a function of the way the universe compacts the information it has to work with. Here, space is not continuous and smooth. Space is made up of tiny 'cells' of information, similar to pixels in a photo or squares on the screen of a computer game. In each cell is basic information about the universe, where, say, a particle is, and all the cells together make up the fabric of the universe. If you place items within this space, the system gets more complex. But when all of those items come together to be one item instead of many, the information is simple again. The universe, in this view, naturally tends to seek out states of minimal information entropy.

The real kicker is that, if you do the numbers, the entropic 'informational force' created by this tendency toward simplicity is exactly equivalent to Newton's law of gravitation, as shown in my paper. This theory builds on earlier studies of 'entropic gravity' but goes a step further. Connecting information dynamics with gravity leads to the interesting conclusion that the universe could be running on some kind of cosmic software. In an artificial universe, maximum-efficiency rules would be expected. Symmetries would be expected. Compression would be expected. And law, that is, gravity, would be expected to emerge from these computational rules.

We may not yet have definitive evidence that we live in a simulation. But the deeper we look, the more our universe seems to behave like a computational process.

Melvin M. Vopson is an associate professor of physics at the University of Portsmouth. This article is republished from The Conversation under a Creative Commons license. Read the original article.