Did 8 physicists make the perfect cacio e pepe? I tested the research


CNN · 09-05-2025

Few things are as impressive to dinner party guests as a perfectly executed à la minute pasta dish.
As a Roman-born foodie, I would argue that cacio e pepe, a Roman pasta recipe that is as delicious and simple as it is finicky, is a bold menu choice for the novice cook.
The name says it all: pasta, cacio (cheese) and pepe (pepper). That's it. That's the recipe.
'There is no margin for error with just pasta, Pecorino Romano cheese, and black pepper,' Michele Casadei Massari, CEO and executive chef of Lucciola Italian Restaurant in Manhattan, told CNN via email.
'The main challenge lies in achieving a stable emulsion: If the cheese is overheated or the starch-water balance is wrong, the sauce will separate,' Massari said.
When that happens, the pecorino forms a gloopy cheesy mess, sticking to everything but the noodles — sad and naked.
So, how can one reliably avoid unpleasant dinnertime disasters? Either by practicing a lot — and inevitably making mistakes in the process, as I have — or by using science to guide the way.
Eight Italian physicists collaborated to crack the code of a foolproof cacio e pepe recipe, studying the properties of cheese, starch and water at different temperatures to learn how to replicate a flawless dish every time.
The study was published April 29 in the journal Physics of Fluids.
The magic trick? Adding a precise amount of cornstarch relative to the overall quantity of cheese used, to keep the dreaded clumps at bay.
When I first heard about the method, the Roman cuisine purist in me was skeptical.
Too many times, I've read cheat sheets for the perfect carbonara dish in which the solution to perfect, non-scrambled-egg sauce is to add cream (please, do not do this). In the interest of objectivity, I had to test the recipe myself and talk to these fellow Italians behind the research.
In a conversation in Italian, three of the study authors shared that the research grew out of their frustration with one too many cacio e pepe dishes gone wrong. Thanks to the group's familiarity with the concept of 'liquid-liquid phase separation,' they knew how to investigate the problem scientifically.
'At some point, the eight of us were all at the Max Planck Institute in Dresden (Germany), some as PhD students, some as postdocs,' said study coauthor Daniel Maria Busiello, a statistical physicist at Italy's University of Padova. At that time, 'something we'd often do was cooking Italian recipes, not just for Italians, but for others as well.'
Making big batches of cacio e pepe to feed other hungry scientists turned out to be a near-impossible feat. 'There were problems with controlling the temperature of the sauce and of the noodles, causing these clumps,' Busiello said.
'I remember a time the dish came out inedible. Something clicked then,' said study coauthor Ivan Di Terlizzi, one of the group's top cooks, and a statistical physicist at the Max Planck Institute for the Physics of Complex Systems in Dresden.
Di Terlizzi approached fellow researcher Giacomo Bartolucci, now a biophysicist at the University of Barcelona in Spain, with a working theory about possible commonalities between the behavior of cacio e pepe sauce and that of aqueous solutions of proteins inside cells. Bartolucci had focused on phase separation and the aggregation of proteins for his doctoral research.
The scientists set out to understand whether the cheese and water in this recipe could be scientifically described as 'a system that undergoes phase separation at high temperatures,' in Busiello's words.
'We had a theoretical framework and a practical problem,' Busiello added.
Innovating on traditional Italian recipes, even on solid scientific grounds, is a high-risk endeavor. Given the touchy subject, the study authors are, by design, all Italian.
'If we have to piss off a bunch of people, at least let it be eight Italians who did it,' Di Terlizzi said.
To investigate the behavior of cacio e pepe sauce under heat, the researchers conducted experiments recreating the cooking process in a controlled setup.
'We prepared small batches of sauce using precise amounts of cheese, starch, and water, and gradually heated them using a sous vide device to carefully control the temperature,' Di Terlizzi explained.
'At each stage, we took a small sample of the sauce, placed it in a petri dish and photographed it from above to observe how clumps formed. This allowed us to track how changes in temperature and ingredients affected the smoothness of the sauce.'
The researchers found that the concentration of starch in the sauce is the key factor influencing its stability. If the proportion of starch falls below 1% of the total cheese weight, the cheese will enter what the authors call the 'Mozzarella phase' — in which clumps are inescapable and the dish is ruined — at lower temperatures. A 2% to 3% starch-to-cheese ratio yields the best results.
In the recipe for two published in the study, the researchers used 5 grams (0.18 ounce) of cornstarch or potato starch dissolved in water, heated up gently to form a gel, then cooled down with more water before being blended with 200 grams (7 ounces) of pecorino cheese.
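For cooks who want to use a different amount of cheese, the ratio is easy to check with a few lines of arithmetic. The sketch below is purely illustrative and not from the study; the function name and the 2.5% default are my own, chosen because 5 grams of starch per 200 grams of cheese works out to 2.5%, inside the 2% to 3% window the researchers recommend.

```python
def starch_for_cheese(cheese_g: float, ratio: float = 0.025) -> float:
    """Grams of corn or potato starch for a given weight of Pecorino,
    using the study's reported 2-3% starch-to-cheese window
    (2.5% matches the paper's 5 g starch per 200 g cheese)."""
    if not 0.02 <= ratio <= 0.03:
        raise ValueError("Study reports best results at 2-3% starch-to-cheese")
    return cheese_g * ratio

# Example: the study's recipe for two uses 200 g of Pecorino Romano.
print(starch_for_cheese(200))  # -> 5.0 grams, as in the published recipe
```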
Sodium citrate, a common additive used to make smooth mac and cheese, also worked well in their recipe, although according to Di Terlizzi it lent the dish a slightly 'cheese single' aftertaste.
The scientifically engineered pecorino cream can withstand temperature changes better than the classic cheese-starchy water mixture made with pasta cooking water and can even be reheated.
Temperature is in fact another key factor that can make or break the sauce, which has to do with how the proteins in the cheese behave when heated.
'The sauce is stable if it is created at low temperatures and the starch has bonded with the proteins. If, after that happens, you expose the sauce to high temperatures, the proteins can no longer interact,' Di Terlizzi said.
If the emulsion of cheese and starch happens at high temperatures, though, 'there's no guarantee that proteins will bond with the starch before they bond with other proteins, which causes the aggregation.'
The scientifically optimized recipe will also yield solid results for large batches of pasta, as the stable sauce gives you more flexibility in dealing with a large volume of piping-hot noodles, which cools down more slowly than a smaller batch would.
When trying the scientifically optimized recipe, don't throw away all the pasta cooking water! You'll still need some of it in the final mixing of all ingredients (the 'mantecatura' in Italian). Just be careful to let it cool down slightly.
When I tested the recipe, the instructions felt straightforward, and the process was quick, though it required a few more steps than what I'm used to (such as forming the starch gel on the stove).
It was frankly odd to work with a pecorino cream that felt so smooth — it almost reminded me of a jarred sauce. Nothing changes as far as the pepper goes. Just crack as much of it as you'd like and toast it in a pan to release its aroma.
My husband and I enjoyed the dish. It tasted great, and dealing with a pecorino cream that needs much less babying at the stove and allows for a much less time-sensitive mantecatura is a definite plus. Also, who doesn't love the idea of a cacio e pepe party for a crowd?
My only qualm is that knowing starch had been added to the sauce skewed my perception of the dish's mouthfeel, though that wasn't the case for my husband.
As a proud Roman home cook, my pursuit of a perfect classic cacio e pepe will be a life-long experiment, but the scientifically optimized recipe exceeded my expectations and would be a good initiation for home cooks who have been put off by the dish's treachery thus far.
But what exactly makes the traditional way of making this dish, by emulsifying the cheese with pasta cooking water, so challenging?
It's practically impossible to know just how much starch is naturally present in the pasta cooking water, so the success of the crucial emulsion is more of a gamble, but there are tricks to mitigate catastrophe.
One, coming from Massari, is to use less water to boil the pasta.
'For cacio e pepe, I recommend using about 6 to 8 cups of water for 7 ounces of pasta, which translates to a ratio closer to 1 part pasta to 6–7 parts water by weight — significantly less than in traditional pasta cooking,' the chef said.
A classic rule of thumb for cooking pasta, according to Massari, is to use about 1 quart of water for every 3½ ounces of pasta, and 2 teaspoons of salt.
The researchers, who used an arbitrary 10-to-1 ratio of water to pasta in their experiments, also said that reducing that volume of water by two-thirds generally concentrated the natural pasta starch to a safe degree.
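For a rough sense of why less water means a more concentrated starch solution, here is a small illustrative calculation. Note that the fraction of pasta weight released into the water as starch is an assumed placeholder for the sake of the example, not a number from the study or from Massari.

```python
# Rough illustration of why less boiling water yields starchier water.
# ASSUMPTION: RELEASED_STARCH_FRACTION is a placeholder chosen for
# illustration only, not a measured figure from the study or the chef.
RELEASED_STARCH_FRACTION = 0.02  # assume ~2% of the pasta's weight

def starch_concentration(pasta_g: float, water_g: float,
                         released: float = RELEASED_STARCH_FRACTION) -> float:
    """Percent starch in the cooking water after boiling."""
    starch_g = pasta_g * released
    return 100 * starch_g / water_g

pasta = 300  # grams of pasta, as in the study's recipe for two
full = starch_concentration(pasta, water_g=10 * pasta)         # 10-to-1 water
reduced = starch_concentration(pasta, water_g=10 * pasta / 3)  # cut by two-thirds
print(f"10:1 water: {full:.2f}% starch; reduced by two-thirds: {reduced:.2f}%")
# Cutting the water by two-thirds roughly triples the starch concentration.
```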
Massari also recommends emulsifying the cheese and pasta water below 60 degrees Celsius (140 degrees Fahrenheit), which is in line with the new study's findings on temperature.
'I first create a cold cheese cream, using finely grated Pecorino and a small amount of the pasta's starchy cooking water, blending until smooth. Then, I toss the pasta off the heat with the cream and freshly cracked black pepper, adding extra water to adjust the texture,' Massari said. 'The result should be a silky, cohesive sauce that clings beautifully to every strand of pasta, not a broken or heavy coating.'
'The natural starch from properly handled pasta is more than sufficient to achieve a creamy, stable, and authentic sauce without compromising the nutritional integrity of the dish,' Massari added.
Another chef's trick to buy you some time while attempting the recipe the traditional way is 'pasta regeneration,' which involves partially cooking the pasta for about 70% of its cooking time, immediately shocking it in ice water to stop gelatinization, and finishing it later before serving.
'This method preserves al dente texture while enhancing the final release of surface starch, which is crucial for stabilizing delicate emulsions,' Massari explained.
The researchers said their scientifically optimized recipe has surged to meme status, online and offline.
'Some social media users were hypercritical about the recipe we proposed, despite it being used before in prestigious restaurants,' Di Terlizzi told CNN. 'Overall, I can say that excitement prevailed, especially in the scientific community,' he added.
'We won't say we invented the definitive method,' Di Terlizzi said, but this method will save you from ruining good, expensive and hard-to-source pecorino cheese.
For the researchers, that's personal: 'We're in Germany. We have that shipped to us all the way from Italy. We can't just buy it at the store every day,' Di Terlizzi said. 'So, when the dish turns out badly, that bothers us.'
A big pasta dinner celebrated the publication of the paper, with the researchers preparing at least 4 pounds of pasta for a crowd.
'We were on pins and needles because our diners all knew about the experiment — but it worked perfectly,' Busiello said.
Bartolucci added, 'That was our trial by fire.'
This is CNN's summary of the recipe presented in the study. You can find Pecorino Romano DOP — which stands for Protected Designation of Origin, a certification of a product's origin and quality assigned by the Italian government and the European Union — and tonnarelli pasta at Italian grocers and online specialty stores.
Serves 2
Ingredients
• Salt
• 5 grams (2 teaspoons) cracked black peppercorns, plus more for serving
• 5 grams (2 teaspoons) cornstarch or potato starch
• 200 grams (1 ½ cups, firmly packed) pregrated Pecorino Romano DOP (such as Fulvi, Locatelli or Cello), plus more for serving
• 300 grams (10.6 ounces) pasta, preferably tonnarelli (spaghetti or rigatoni also work well)
Instructions
1. Bring a large pot of lightly salted water to a boil.
2. While the water comes to a boil, add peppercorns in a single layer to a dry pan over medium-low heat. Toast until fragrant, 1 to 2 minutes. Remove pepper from heat immediately.
3. Make the starch gel. In a small saucepan, dissolve the cornstarch by whisking it into 50 grams (3 tablespoons plus 1 teaspoon) cold water. Heat the mixture gently over low heat until it thickens and turns nearly clear. Remove the starch gel from heat and whisk in 100 grams (6 ¾ tablespoons) water to cool. The mixture will return to a liquid state.
4. Make the pecorino cream. Add the starchy water, grated cheese and peppercorns to the bowl of a food processor and pulse to combine until a smooth cream forms.
5. Cook the pasta according to the package directions until al dente, reserving 237 milliliters (1 cup) of pasta water before draining. Drain the pasta and let cool for up to 1 minute.
6. Mix the pasta with the sauce, ensuring even coating, and adjust the consistency by gradually adding pasta water as needed. Keep the sauce slightly runny as it tends to thicken as it cools. If needed, the dish can withstand gentle reheating (up to 80 or 90 C, 176 to 194 F) to reach serving temperature.
7. Sprinkle with additional grated cheese and pepper and serve immediately.
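Because the stabilized sauce lends itself to big batches, here is a minimal scaling sketch, assuming the quantities above simply scale linearly with the number of servings. The helper function, its name and the rounding are mine, not CNN's or the researchers'.

```python
# Minimal sketch: scale the study's recipe (written for 2 servings)
# linearly to a larger party. Quantities per 2 servings are taken from
# the ingredient list and instructions above; linear scaling is assumed.
BASE_PER_2_SERVINGS = {  # grams
    "pasta": 300,
    "pecorino_romano": 200,
    "starch": 5,
    "water_for_starch_gel": 150,  # 50 g to dissolve + 100 g to cool the gel
    "cracked_pepper": 5,
}

def scale_recipe(servings: int) -> dict:
    """Return ingredient weights in grams for the requested servings."""
    factor = servings / 2
    return {name: round(grams * factor, 1)
            for name, grams in BASE_PER_2_SERVINGS.items()}

print(scale_recipe(8))
# {'pasta': 1200.0, 'pecorino_romano': 800.0, 'starch': 20.0,
#  'water_for_starch_gel': 600.0, 'cracked_pepper': 20.0}
```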
'I recommend using high-quality spaghettoni pasta made from durum wheat semolina, bronze-extruded and slow-dried at low temperatures,' Massari told CNN. 'I prefer using Matt, a heritage durum wheat cultivated mainly in Puglia and Sicily.'
You can find Matt spaghettoni, Sarawak black peppercorns and Pecorino Romano DOP for Massari's traditional cacio e pepe at online specialty stores.
Serves 2
Ingredients
• Sea salt, preferably coarse Sicilian sea salt
• 5 grams (1 teaspoon) freshly cracked Sarawak black peppercorns, plus more for serving
• 100 grams (1 cup, firmly packed) pregrated Pecorino Romano DOP (such as Fulvi, Locatelli or Cello), plus more for serving
• 200 grams (7 ounces) spaghettoni, preferably Matt durum wheat spaghettoni
Directions
1. Bring 6 to 8 cups of lightly salted water to a boil in a large pot. Toast the black pepper in a dry pan over medium-low heat until fragrant. Remove pepper from heat immediately.
2. Cook the spaghettoni according to the package directions until slightly al dente, reserving 237 milliliters (1 cup) of the cooking liquid as it cooks.
3. While the pasta is cooking, prepare a cold emulsion by mixing grated Pecorino Romano with a small ladle of warm pasta water (ideally under 60 C, or 140 F) in a medium bowl until it forms a creamy base.
4. Drain the pasta slightly al dente, then transfer it to the pan with the pepper.
5. Toss the pasta with the Pecorino cream, gradually adjusting with more pasta water to create a glossy, smooth sauce that perfectly coats the noodles. Add more black pepper and cheese before serving. Serve immediately.
