An Important Force of the Universe Appears to Be Changing, Scientists Find

Yahoo | 23-03-2025

Dark energy, the mysterious force that makes up roughly 70 percent of everything in existence, was hypothesized to explain why the expansion of the universe is accelerating. Ever since, it's been thought of as a constant, immutable presence.
Now, the latest observations from the Dark Energy Spectroscopic Instrument (DESI) indicate that dark energy has actually changed over time — a development that could potentially upend the prevailing cosmological model, and perhaps hint at a new understanding of physics. The findings, detailed in a series of papers currently awaiting peer review, have implications not only for how the universe has evolved, but what its eventual fate might be as well.
"It sounds like it will be a paradigm shift, something that will change our understanding and the way we are putting all the pieces together," Mustapha Ishak-Boushaki, a cosmologist at the University of Texas and DESI team member, told Quanta Magazine of the findings.
The DESI telescope, located at Kitt Peak National Observatory in Arizona, maps and measures galaxies to tease out the effects of dark energy. It has now surveyed a staggering 15 million galaxies as far as 11 billion light-years away, providing the most comprehensive portrait to date of how galaxies shifted and clustered together over the eons — movements thought to betray the presence of dark energy.
Following up on preliminary findings shared a year ago, the latest DESI results strongly suggest that the acceleration of the universe's expansion started sooner than once thought, peaked early on, and is currently weakening.
This is a big deal. Dark energy, as it's currently theorized, stems from the idea of a cosmological constant, first proposed by Albert Einstein: an unseen background force powerful enough to explain why the universe, with all its mass, doesn't collapse under its own gravity.
Einstein later called the cosmological constant his "biggest blunder," but the idea found a second life in the late 1990s, when astronomers invoked dark energy to explain the newly discovered acceleration of the universe's expansion. Dark energy, envisioned as this constant, is now a cornerstone of the lambda-CDM model, the standard model of cosmology.
In this model, dark energy pushes against the literal weight of existence to keep it all from crashing down, accelerating the universe's expansion at a fixed rate. Meanwhile, invisible dark matter, which is thought to make up roughly 25 percent of the universe compared to the measly five percent of regular matter we're made of, governs the formation of galaxies from the shadows with the pull of its gravity.
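To make the contrast concrete, here is a minimal Python sketch, assuming illustrative parameter values rather than DESI's fitted numbers, of how an evolving equation of state w(a) = w0 + wa(1 - a), the common CPL parametrization used in analyses like DESI's, changes the expansion rate H(z) relative to a pure cosmological constant (w = -1):

```python
import numpy as np

# Sketch: expansion rate H(z) for a cosmological constant (w = -1)
# versus an evolving dark energy with w(a) = w0 + wa*(1 - a).
# All parameter values below are illustrative assumptions.
H0 = 70.0        # Hubble constant today, km/s/Mpc (assumed)
OMEGA_M = 0.3    # matter fraction (assumed)
OMEGA_DE = 0.7   # dark-energy fraction (assumed)

def de_density(a, w0=-1.0, wa=0.0):
    """rho_DE(a) / rho_DE(today) for the CPL form w(a) = w0 + wa*(1 - a)."""
    return a ** (-3 * (1 + w0 + wa)) * np.exp(-3 * wa * (1 - a))

def hubble(z, w0=-1.0, wa=0.0):
    a = 1.0 / (1.0 + z)
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_DE * de_density(a, w0, wa))

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z={z}: constant-w H={hubble(z):6.1f}, "
          f"evolving-w H={hubble(z, w0=-0.7, wa=-1.0):6.1f} km/s/Mpc")
```

With these illustrative values (w0 > -1, wa < 0), the dark-energy density rises, peaks in the past, and is declining today, which is the qualitative behavior the DESI team reports.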
Though it may be the standard theory, lambda-CDM has always been contentious, not least because it doesn't explain what dark energy actually is (in Einstein's framing, it amounts to an energy intrinsic to the vacuum of space itself).
It's too early to say that the dominant model has been trumped, but it's on the ropes. The DESI results, combined with extensive observations of the cosmic microwave background — the leftover light from the Big Bang — and of thousands of supernovas, indicate a discrepancy of 4.2 sigma, a measure of statistical significance indicating, in this case, that there's only about a one-in-30,000 chance of such a mismatch arising as a fluke if the lambda-CDM model were correct, per Quanta Magazine. Five sigma, however, is the standard needed to be considered a bona fide discovery. Though it hasn't quite made the cut yet, the latest work yields a higher sigma level than reported a year ago — and there are still two more years of DESI data to parse through.
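For readers curious how a sigma level translates into those odds, the conversion is just a Gaussian tail probability. A minimal sketch (the exact odds quoted depend on whether a one-sided or two-sided convention is used):

```python
from scipy.stats import norm

# Convert a significance in "sigma" to the probability of a fluke at
# least that large under a standard normal distribution.
# norm.sf is the survival function, 1 - CDF.
for sigma in (3.0, 4.2, 5.0):
    p = 2 * norm.sf(sigma)  # two-sided tail probability
    print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1 / p:,.0f})")
```

At 4.2 sigma the two-sided odds come out near 1 in 37,000, in the same ballpark as the 1-in-30,000 figure quoted above; 5 sigma corresponds to roughly 1 in 1.7 million.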
Comfortingly, one possible implication of a waning dark energy is that the universe won't relentlessly expand until it rips itself apart, as one theory holds. On the other hand, if dark energy's powers are diminishing, it's possible its effects won't halt at zero but will reverse course, dooming the cosmos to collapse in on itself. Then again, the mere fact that dark energy can change at all could mean that everything's up in the air.
"As far as theoretical models, Pandora's box just opened," Ishak-Boushaki told New Scientist. "We were stuck with a cosmological constant. We are not stuck anymore."
More on cosmology: Physicists Find That the Universe Could "Collapse Like a House of Cards"

Related Articles

Monster black hole M87 is spinning at 80% of the cosmic speed limit — and pulling in matter even faster

Yahoo | 21 hours ago

The monster black hole lurking at the center of galaxy M87 is an absolute beast. It is one of the largest in our vicinity and was the ideal first target for the Event Horizon Telescope. Scientists have taken a fresh look at the supermassive black hole using those iconic Event Horizon Telescope images and have now figured out just how fast this monster is spinning and how much material it's devouring. The results are pretty mind-blowing.

This black hole, which weighs in at 6.5 billion times the mass of our Sun, is spinning at roughly 80% of the theoretical maximum speed possible in the universe. To put that in perspective, the inner edge of its accretion disk is whipping around at about 14% the speed of light: around 42 million meters per second.

The team figured this out by studying the "bright spot" in the original black hole images. That asymmetric glow isn't just there for show: it's caused by something called relativistic Doppler beaming. The material on one side of the disk is moving toward us so fast that it appears much brighter than the material moving away from us. By measuring this brightness difference, the scientists could calculate the rotation speed.

But here's where it gets really interesting. The researchers also looked at the magnetic field patterns around the black hole, which act like a roadmap for how material spirals inward. They discovered that matter is falling into the black hole at about 70 million meters per second, roughly 23% the speed of light.

Using these measurements, they estimated that M87's black hole is consuming somewhere between 0.00004 and 0.4 solar masses' worth of material every year. That might sound like a lot, but it's actually pretty modest for such a massive black hole: it's operating well below what scientists call the "Eddington limit," meaning it's in a relatively quiet phase.

Perhaps most importantly, the energy from all this in-falling material appears to perfectly match the power output of M87's famous jet, that spectacular beam of particles shooting out at near light-speed that extends for thousands of light-years. This supports the idea that these powerful jets are indeed powered by the black hole's feeding process.

The study represents a major step forward in understanding how supermassive black holes work. While previous estimates of M87's spin ranged anywhere from 0.1 to 0.98, this new method suggests it's definitely on the high end: at least 0.8 and possibly much closer to the theoretical maximum of 0.998.

As we gear up for even more powerful telescopes and imaging techniques, M87's black hole will likely remain a cosmic laboratory for testing our understanding of gravity, spacetime, and the most extreme physics in the universe. Each new measurement brings us closer to answering fundamental questions about how these cosmic monsters shape entire galaxies and maybe even how they'll influence the ultimate fate of the cosmos itself.
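As a rough back-of-the-envelope check (my own arithmetic, not from the study), it's easy to see why even 0.4 solar masses per year counts as "well below" the Eddington limit for a black hole this massive:

```python
# Hedged sketch: Eddington-limited accretion rate for M87's black hole,
# assuming the textbook relation L_Edd ~ 1.26e38 erg/s per solar mass
# and a conventional 10% radiative efficiency.
M_SUN_G = 1.989e33   # solar mass, grams
YEAR_S = 3.156e7     # seconds per year
C_CM_S = 2.998e10    # speed of light, cm/s

def eddington_rate_msun_per_yr(mass_msun, efficiency=0.1):
    l_edd = 1.26e38 * mass_msun                      # erg/s
    mdot_g_per_s = l_edd / (efficiency * C_CM_S**2)  # Mdot = L / (eta c^2)
    return mdot_g_per_s * YEAR_S / M_SUN_G

mdot_edd = eddington_rate_msun_per_yr(6.5e9)
print(f"Eddington rate: ~{mdot_edd:.0f} solar masses/yr")             # ~140
print(f"0.4 Msun/yr is ~{0.4 / mdot_edd:.0e} of the Eddington rate")  # ~3e-3
```

Even at the top of the estimated range, M87's black hole is feeding at only a few thousandths of its Eddington rate, which is why astronomers describe it as being in a quiet phase.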
The original version of this article was published on Universe Today.

A New Law of Nature Attempts to Explain the Complexity of the Universe

WIRED | a day ago

Jun 8, 2025, 7:00 AM

A novel suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution. The original version of this story appeared in Quanta Magazine.

In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they? Many answers to Fermi's 'paradox' have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don't appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception.

A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics—the law that dictates an inevitable rise in entropy, a measure of disorder. If they're right, complex and intelligent life should be widespread.

In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter—living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function.

This hypothesis, formulated by the mineralogist Robert Hazen and the astrobiologist Michael Wong of the Carnegie Institution in Washington, DC, along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not 'complete' in the sense of supplying all we need to comprehend natural phenomena; rather, evolution—biological or otherwise—introduces functions and novelties that could not even in principle be predicted from physics alone. 'I'm so glad they've done what they've done,' said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. 'They've made these questions legitimate.'

Others argue that extending evolutionary ideas about function to nonliving systems is an overreach. The quantitative value that measures information in this new approach is not only relative—it changes depending on context—it's also impossible to calculate. For this and other reasons, critics have charged that the new theory cannot be tested, and therefore is of little use.

The work taps into an expanding debate about how biological evolution fits within the normal framework of science. The theory of Darwinian evolution by natural selection helps us to understand how living things have changed in the past. But unlike most scientific theories, it can't predict much about what is to come. Might embedding it within a meta-law of increasing complexity let us glimpse what the future holds?
Making Meaning

The story begins in 2003, when the biologist Jack Szostak published a short article in Nature proposing the concept of functional information. Szostak—who six years later would get a Nobel Prize for unrelated work—wanted to quantify the amount of information or complexity that biological molecules like proteins or DNA strands embody.

Classical information theory, developed by the telecommunications researcher Claude Shannon in the 1940s and later elaborated by the Russian mathematician Andrey Kolmogorov, offers one answer. Per Kolmogorov, the complexity of a string of symbols (such as binary 1s and 0s) depends on how concisely one can specify that sequence uniquely. For example, consider DNA, which is a chain of four different building blocks called nucleotides. A strand composed only of one nucleotide, repeating again and again, has much less complexity—and, by extension, encodes less information—than one composed of all four nucleotides in which the sequence seems random (as is more typical in the genome).

But Szostak pointed out that Kolmogorov's measure of complexity neglects an issue crucial to biology: how biological molecules function. In biology, sometimes many different molecules can do the same job. Consider RNA molecules, some of which have biochemical functions that can easily be defined and measured. (Like DNA, RNA is made up of sequences of nucleotides.) In particular, short strands of RNA called aptamers securely bind to other molecules.

Let's say you want to find an RNA aptamer that binds to a particular target molecule. Can lots of aptamers do it, or just one? If only a single aptamer can do the job, then it's unique, just as a long, seemingly random sequence of letters is unique. Szostak said that this aptamer would have a lot of what he called 'functional information.' If many different aptamers can perform the same task, the functional information is much smaller. So we can calculate the functional information of a molecule by asking how many other molecules of the same size can do the same task just as well.

Szostak went on to show that in a case like this, functional information can be measured experimentally. He made a bunch of RNA aptamers and used chemical methods to identify and isolate the ones that would bind to a chosen target molecule. He then mutated the winners a little to seek even better binders and repeated the process. The better an aptamer gets at binding, the less likely it is that another RNA molecule chosen at random will do just as well: The functional information of the winners in each round should rise. Szostak found that the functional information of the best-performing aptamers got ever closer to the maximum value predicted theoretically.

Selected for Function

Hazen came across Szostak's idea while thinking about the origin of life—an issue that drew him in as a mineralogist, because chemical reactions taking place on minerals have long been suspected to have played a key role in getting life started. 'I concluded that talking about life versus nonlife is a false dichotomy,' Hazen said. 'I felt there had to be some kind of continuum—there has to be something that's driving this process from simpler to more complex systems.' Functional information, he thought, promised a way to get at the 'increasing complexity of all kinds of evolving systems.'
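To see how the bookkeeping works, here is a toy sketch of Szostak's measure. The 'activity' function below is a made-up stand-in (real aptamer activities come from binding experiments); functional information is then -log2 of the fraction of all possible sequences meeting an activity threshold:

```python
import itertools
import math

NUCLEOTIDES = "ACGU"

def activity(seq, target="GCAU"):
    # Hypothetical stand-in for a measured binding activity:
    # count positions matching an arbitrary 4-letter motif.
    return sum(a == b for a, b in zip(seq, target))

def functional_information(threshold, length=4):
    """-log2 of the fraction of sequences with activity >= threshold."""
    total = functional = 0
    for letters in itertools.product(NUCLEOTIDES, repeat=length):
        total += 1
        if activity("".join(letters)) >= threshold:
            functional += 1
    fraction = functional / total
    return -math.log2(fraction) if fraction else float("inf")

for t in range(5):
    print(f"threshold {t}: {functional_information(t):5.2f} bits")
```

A threshold that every sequence meets carries zero bits; when only one of the 256 possible four-letter sequences qualifies, the functional information reaches its maximum, -log2(1/256) = 8 bits, mirroring Szostak's point that a uniquely capable aptamer carries the most functional information.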
In 2007 Hazen collaborated with Szostak to write a computer simulation involving algorithms that evolve via mutations. Their function, in this case, was not to bind to a target molecule, but to carry out computations. Again they found that the functional information increased spontaneously over time as the system evolved.

There the idea languished for years. Hazen could not see how to take it any further until Wong accepted a fellowship at the Carnegie Institution in 2021. Wong had a background in planetary atmospheres, but he and Hazen discovered they were thinking about the same questions. 'From the very first moment that we sat down and talked about ideas, it was unbelievable,' Hazen said.

'I had got disillusioned with the state of the art of looking for life on other worlds,' Wong said. 'I thought it was too narrowly constrained to life as we know it here on Earth, but life elsewhere may take a completely different evolutionary trajectory. So how do we abstract far enough away from life on Earth that we'd be able to notice life elsewhere even if it had different chemical specifics, but not so far that we'd be including all kinds of self-organizing structures like hurricanes?'

The pair soon realized that they needed expertise from a whole other set of disciplines. 'We needed people who came at this problem from very different points of view, so that we all had checks and balances on each other's prejudices,' Hazen said. 'This is not a mineralogical problem; it's not a physics problem, or a philosophical problem. It's all of those things.'

They suspected that functional information was the key to understanding how complex systems like living organisms arise through evolutionary processes happening over time. 'We all assumed the second law of thermodynamics supplies the arrow of time,' Hazen said. 'But it seems like there's a much more idiosyncratic pathway that the universe takes. We think it's because of selection for function—a very orderly process that leads to ordered states. That's not part of the second law, although it's not inconsistent with it either.'

Looked at this way, the concept of functional information allowed the team to think about the development of complex systems that don't seem related to life at all. At first glance, it doesn't seem a promising idea. In biology, function makes sense. But what does 'function' mean for a rock?

All it really implies, Hazen said, is that some selective process favors one entity over lots of other potential combinations. A huge number of different minerals can form from silicon, oxygen, aluminum, calcium, and so on. But only a few are found in any given environment. The most stable minerals turn out to be the most common. But sometimes less stable minerals persist because there isn't enough energy available to convert them to more stable phases.

This might seem trivial, like saying that some objects exist while other ones don't, even if they could in theory. But Hazen and Wong have shown that, even for minerals, functional information has increased over the course of Earth's history. Minerals evolve toward greater complexity (though not in the Darwinian sense).
Hazen and colleagues speculate that complex forms of carbon such as graphene might form in the hydrocarbon-rich environment of Saturn's moon Titan—another example of an increase in functional information that doesn't involve life.

It's the same with chemical elements. The first moments after the Big Bang were filled with undifferentiated energy. As things cooled, quarks formed and then condensed into protons and neutrons. These gathered into the nuclei of hydrogen, helium, and lithium atoms. Only once stars formed and nuclear fusion happened within them did more complex elements like carbon and oxygen form. And only when some stars had exhausted their fusion fuel did their collapse and explosion in supernovas create heavier elements such as heavy metals. Steadily, the elements increased in nuclear complexity.

Wong said their work implies three main conclusions. First, biology is just one example of evolution. 'There is a more universal description that drives the evolution of complex systems.' Second, he said, there might be 'an arrow in time that describes this increasing complexity,' similar to the way the second law of thermodynamics, which describes the increase in entropy, is thought to create a preferred direction of time. Finally, Wong said, 'information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.'

In the work Hazen and Szostak conducted on evolution using artificial-life algorithms, the increase in functional information was not always gradual. Sometimes it would happen in sudden jumps. That echoes what is seen in biological evolution. Biologists have long recognized transitions where the complexity of organisms increases abruptly. One such transition was the appearance of organisms with cellular nuclei (around 1.8 billion to 2.7 billion years ago). Then there was the transition to multicellular organisms (around 2 billion to 1.6 billion years ago), the abrupt diversification of body forms in the Cambrian explosion (540 million years ago), and the appearance of central nervous systems (around 600 million to 520 million years ago). The arrival of humans was arguably another major and rapid evolutionary transition.

Evolutionary biologists have tended to view each of these transitions as a contingent event. But within the functional-information framework, it seems possible that such jumps in evolutionary processes (whether biological or not) are inevitable.

In these jumps, Wong pictures the evolving objects as accessing an entirely new landscape of possibilities and ways to become organized, as if penetrating to the 'next floor up.' Crucially, what matters—the criteria for selection, on which continued evolution depends—also changes, plotting a wholly novel course. On the next floor up, possibilities await that could not have been guessed before you reached it.

For example, during the origin of life it might initially have mattered that proto-biological molecules would persist for a long time—that they'd be stable. But once such molecules became organized into groups that could catalyze one another's formation—what Kauffman has called autocatalytic cycles—the molecules themselves could be short-lived, so long as the cycles persisted. Now it was dynamical, not thermodynamic, stability that mattered.
Ricard Solé of the Santa Fe Institute thinks such jumps might be equivalent to phase transitions in physics, such as the freezing of water or the magnetization of iron: They are collective processes with universal features, and they mean that everything changes, everywhere, all at once. In other words, in this view there's a kind of physics of evolution—and it's a kind of physics we know about already.

The Biosphere Creates Its Own Possibilities

The tricky thing about functional information is that, unlike a measure such as size or mass, it is contextual: It depends on what we want the object to do, and what environment it is in. For instance, the functional information for an RNA aptamer binding to a particular molecule will generally be quite different from the information for binding to a different molecule.

Yet finding new uses for existing components is precisely what evolution does. Feathers did not evolve for flight, for example. This repurposing reflects how biological evolution is jerry-rigged, making use of what's available.

Kauffman argues that biological evolution is thus constantly creating not just new types of organisms but new possibilities for organisms, ones that not only did not exist at an earlier stage of evolution but could not possibly have existed. From the soup of single-celled organisms that constituted life on Earth 3 billion years ago, no elephant could have suddenly emerged—this required a whole host of preceding, contingent but specific innovations.

However, there is no theoretical limit to the number of uses an object has. This means that the appearance of new functions in evolution can't be predicted—and yet some new functions can dictate the very rules of how the system evolves subsequently. 'The biosphere is creating its own possibilities,' Kauffman said. 'Not only do we not know what will happen, we don't even know what can happen.' Photosynthesis was such a profound development; so were eukaryotes, nervous systems and language. As the microbiologist Carl Woese and the physicist Nigel Goldenfeld put it in 2011, 'We need an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. Thus, we end up with an infinite hierarchy.'

The physicist Paul Davies of Arizona State University agrees that biological evolution 'generates its own extended possibility space which cannot be reliably predicted or captured via any deterministic process from prior states. So life evolves partly into the unknown.'

Mathematically, a 'phase space' is a way of describing all possible configurations of a physical system, whether it's as comparatively simple as an idealized pendulum or as complicated as all the atoms comprising the Earth. Davies and his co-workers have recently suggested that evolution in an expanding accessible phase space might be formally equivalent to the 'incompleteness theorems' devised by the mathematician Kurt Gödel. Gödel showed that any system of axioms in mathematics permits the formulation of statements that can't be shown to be true or false. We can only decide such statements by adding new axioms.
Davies and colleagues say that, as with Gödel's theorem, the key factor that makes biological evolution open-ended and prevents us from being able to express it in a self-contained and all-encompassing phase space is that it is self-referential: The appearance of new actors in the space feeds back on those already there to create new possibilities for action. This isn't the case for physical systems, which, even if they have, say, millions of stars in a galaxy, are not self-referential.

'An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms,' said Marcus Heisler, a plant developmental biologist at the University of Sydney and co-author of the incompleteness paper. This connection between biological evolution and the issue of noncomputability, Davies said, 'goes right to the heart of what makes life so magical.'

Is biology special, then, among evolutionary processes in having an open-endedness generated by self-reference? Hazen thinks that in fact once complex cognition is added to the mix—once the components of the system can reason, choose, and run experiments 'in their heads'—the potential for macro-micro feedback and open-ended growth is even greater. 'Technological applications take us way beyond Darwinism,' he said. A watch gets made faster if the watchmaker is not blind.

Back to the Bench

If Hazen and colleagues are right that evolution involving any kind of selection inevitably increases functional information—in effect, complexity—does this mean that life itself, and perhaps consciousness and higher intelligence, is inevitable in the universe? That would run counter to what some biologists have thought. The eminent evolutionary biologist Ernst Mayr believed that the search for extraterrestrial intelligence was doomed because the appearance of humanlike intelligence is 'utterly improbable.' After all, he said, if intelligence at a level that leads to cultures and civilizations were so adaptively useful in Darwinian evolution, how come it only arose once across the entire tree of life?

Mayr's evolutionary point possibly vanishes in the jump to humanlike complexity and intelligence, whereupon the whole playing field is utterly transformed. Humans attained planetary dominance so rapidly (for better or worse) that the question of when it will happen again becomes moot.

But what about the chances of such a jump happening in the first place? If the new 'law of increasing functional information' is right, it looks as though life, once it exists, is bound to get more complex by leaps and bounds. It doesn't have to rely on some highly improbable chance event.

What's more, such an increase in complexity seems to imply the appearance of new causal laws in nature that, while not incompatible with the fundamental laws of physics governing the smallest component parts, effectively take over from them in determining what happens next. Arguably we see this already in biology: Galileo's (apocryphal) experiment of dropping two masses from the Leaning Tower of Pisa no longer has predictive power when the masses are not cannonballs but living birds.

Together with the chemist Lee Cronin of the University of Glasgow, Sara Walker of Arizona State University has devised an alternative set of ideas to describe how complexity arises, called assembly theory.
In place of functional information, assembly theory relies on a number called the assembly index, which measures the minimum number of steps required to make an object from its constituent ingredients.

'Laws for living systems must be somewhat different than what we have in physics now,' Walker said, 'but that does not mean that there are no laws.' But she doubts that the putative law of functional information can be rigorously tested in the lab. 'I am not sure how one could say [the theory] is right or wrong, since there is no way to test it objectively,' she said. 'What would the experiment look for? How would it be controlled? I would love to see an example, but I remain skeptical until some metrology is done in this area.'

Hazen acknowledges that, for most physical objects, it is impossible to calculate functional information even in principle. Even for a single living cell, he admits, there's no way of quantifying it. But he argues that this is not a sticking point, because we can still understand it conceptually and get an approximate quantitative sense of it. Similarly, we can't calculate the exact dynamics of the asteroid belt because the gravitational problem is too complicated—but we can still describe it approximately enough to navigate spacecraft through it.

Wong sees a potential application of their ideas in astrobiology. One of the curious aspects of living organisms on Earth is that they tend to make a far smaller subset of organic molecules than they could make given the basic ingredients. That's because natural selection has picked out some favored compounds. There's much more glucose in living cells, for example, than you'd expect if molecules were simply being made either randomly or according to their thermodynamic stability. So one potential signature of lifelike entities on other worlds might be similar signs of selection outside what chemical thermodynamics or kinetics alone would generate. (Assembly theory similarly predicts complexity-based biosignatures.)

There might be other ways of putting the ideas to the test. Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational 'artificial life.' Hazen also sees possible applications in oncology, soil science, and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues.

Hazen's team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. 'People are approaching us because they are desperate to find a model to explain their system,' Hazen said.

But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It's hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe.
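For a feel of the difference, here is a toy assembly index for strings (my own simplification; real assembly-theory work operates on molecular bond graphs, and computing the index exactly is computationally hard). Each step joins two already-available pieces, and anything built earlier can be reused:

```python
def assembly_index(target):
    """Minimum number of pairwise joins to build `target` from single
    characters, reusing intermediates (brute force; short strings only)."""
    # Any useful intermediate is a contiguous substring of the target.
    subs = {target[i:j] for i in range(len(target))
            for j in range(i + 1, len(target) + 1)}

    def reachable(pool, depth):
        if target in pool:
            return True
        if depth == 0:
            return False
        for a in pool:
            for b in pool:
                joined = a + b
                if joined in subs and joined not in pool:
                    if reachable(pool | {joined}, depth - 1):
                        return True
        return False

    pool = frozenset(target)  # single characters come for free
    depth = 0
    while not reachable(pool, depth):
        depth += 1
    return depth

print(assembly_index("ABABAB"))  # 3 joins: AB, then ABAB, then ABABAB
print(assembly_index("ABCDEF"))  # 5 joins; nothing can be reused
```

Repetitive objects get a low index because subunits are reused, while an object with no repeated structure must be built step by step; assembly theory reads a high index in an abundant object as a possible sign of selection.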
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

The Ardent Belief That Artificial General Intelligence Will Bring Us Infinite Einsteins

Forbes | a day ago

In today's column, I examine an AI conjecture known as the infinite Einsteins. The deal is this. By attaining artificial general intelligence (AGI) and artificial superintelligence (ASI), the resulting AI will allegedly provide us with an infinite number of AI-based Einsteins. We could then have Einstein-level intelligence massively available 24/7. The possibilities seem incredible and enormously uplifting.

Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans have ever made, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.

Let's momentarily put aside the attainment of ASI and focus solely on the potential achievement of AGI, just for the sake of this discussion. No worries -- I'll bring ASI back into the big picture in the concluding remarks.

Imagine that we can arrive at AGI. Since AGI is going to be on par with human intelligence, and since Einstein was a human and made use of his human intelligence, the logical conclusion is that AGI would be able to exhibit the intelligence of the likes of Einstein. The AGI isn't especially simulating Einstein per se. Instead, the belief is that AGI would contain the equivalent of Einstein-level intelligence.

If AGI can exhibit the intelligence of Einstein, the next logical assumption is that this Einstein-like capacity could be replicated within the AGI. We already know that even contemporary generative AI allows for the simulation of multiple personas, see my analysis at the link here. With sufficient hardware, the number of personas can be in the millions and billions, see my coverage at the link here. In short, we could have a vast number of Einstein-like intelligence instances running at the same time.
You can quibble that this is not the same as saying that the number of Einsteins would be infinite. It would not be an infinite count. There would be some limits involved. Nonetheless, the infinite Einsteins as a catchphrase rolls off the tongue and sounds better than saying the millions or billions of Einsteins. We'll let the use of the word 'infinite' slide in this case and agree that it means a quite large number of instances.

Einstein was known for his brilliance when it comes to physics. I bring this up because it is noteworthy that he wasn't known for expertise in biology, medicine, law, or other domains. When someone refers to Einstein, they are essentially emphasizing his expertise in physics, not in other realms.

Would an AGI that provides infinite Einsteins then be considered to have heightened expertise solely in physics? To clarify, that would certainly be an amazing aspect. No doubt about it. On the other hand, those infinite Einsteins would presumably have little to offer in other realms such as biology, medicine, etc. Just trying to establish the likely boundaries involved.

Imagine this disconcerting scenario. We have infinite Einsteins via AGI. People assume that those Einsteins are brilliant in all domains. The AGI via this capacity stipulates a seeming breakthrough in medicine. But the reality is that this is beyond the purview of the infinite Einsteins and turns out to be incorrect. We might be so enamored with the infinite Einsteins that we fall into the mental trap that anything the AGI emits via that capacity is aboveboard and completely meritorious.

Some people are more generalized about the word 'Einstein' and tend to suggest it means being ingenious on an all-around basis. For them, the infinite Einsteins consist of innumerable AI-based geniuses of all kinds. How AGI would model this capacity is debatable. We don't yet know how AGI will work. An open question is whether other forms of intelligence such as emotional intelligence (EQ) get wrapped into the infinite Einsteins. Are we strictly considering book knowledge and straight-ahead intelligence, or shall we toss in all manner of intelligence, including the kitchen sink?

There is no debate about the clear fact that Einstein made mistakes and was not perfect. He was unsure at times of his theories and proposals. He made mistakes and had to correct his work. Historical reviews point out that he at first rejected quantum mechanics and vowed that God does not play dice, a diss against the budding field of quantum theory. A seemingly big miss.

Would an AGI that allows for infinite Einsteins carry over the imperfections of the modeled human into those Einsteins? This is an important point. Envision that we are making use of AGI and the infinite Einsteins to explore new frontiers in physics. Will we know if those Einsteins are making mistakes? Perhaps they opt to reject ideas that are worthy of pursuit. It seems doubtful that we would seek to pursue those rejected ideas, simply due to the powerful assumption that all those Einsteins can't be wrong.

Further compounding the issue would be the matter of AI hallucinations. You've undoubtedly heard or read about so-called AI hallucinations. The precept is that sometimes AI generates confabulations, false statements that appear to be true. A troubling facet is that we aren't yet sure when this occurs, nor how to prevent it, and ferreting out AI hallucinations can be problematic (see my extensive explanation at the link here).
There is a double whammy with those infinite Einsteins. By themselves, they presumably would at times make mistakes and be imperfect. They also would be subject to AI hallucinations. The danger is that we would be relying on the aura of those infinite Einsteins as though they are perfect and unquestionably right.

Would all the AGI-devised infinite Einsteins be in utter agreement with each other? You might claim that the infinite Einsteins would of necessity be of a like mind and ergo would all agree with each other. They lean in the same direction. Anything they say would be an expression of the collective wisdom of the infinite Einsteins. The downside there is that if the infinite Einsteins are acting like lemmings, this seems to increase the chances of any mistakes being given an overabundance of confidence.

Think of it this way. The infinite Einsteins tell us in unison that there's a hidden particle at the core of all physics. Any human trying to disagree is facing quite a gauntlet, since an infinite set of Einsteins has made a firm declaration. A healthy principle of science is supposed to be the use of scientific discourse and debate. Would those infinite Einsteins be dogmatic, or would they be willing to engage in open-ended human debates and inquiries?

Let's consider that maybe the infinite Einsteins would not be in utter agreement with each other. Perhaps they would engage in scientific debate among themselves. This might be preferred, since it could lead to creative ideas and thinking outside the box. The dilemma is what we do when the infinite Einsteins tell us they cannot agree. Do we have them vote, and based on a tally decide that whatever is stated seems to be majority-favored? How might the intellectual battle amid infinite Einsteins be suitably settled?

A belief that there will be infinite Einsteins ought to open our eyes to the possibility that there would also be infinite Isaac Newtons, Aristotles, and so on. There isn't any particular reason to restrict AGI to just Einsteins. Things might seem to get out of hand. All these infinite geniuses become mired in disagreements across the board. Who are we to believe? Maybe the Einsteins are convinced by some other personas that up is down and down is up. Endless arguments could consume tons of precious computing cycles.

We must also acknowledge that evildoers of historical note could also be part of the infinite series. There could be infinite Genghis Khans, Joseph Stalins, and the like. Might they undercut the infinite Einsteins? Efforts to ensure that AI aligns with contemporary human values are a vital consideration, and numerous pathways are currently being explored; see my discussion at the link here. The hope is that we can stave off the infinite evildoers within AGI.

Einstein had grave concerns about the use of atomic weapons. It was his handiwork that aided in the development of the atomic bomb. He found himself mired in concern at what he had helped bring to fruition. An AGI with infinite Einsteins might discover the most wonderful of new inventions. The odds are that those discoveries could be used for the good of humankind or to harm humankind. It is a quandary whether we want those infinite Einsteins to share what they uncover widely and in an unfettered way.

Here's the deal. Would we restrict access to the infinite Einsteins so that evildoers could not use the capacity to devise destructive possibilities? That's a lot easier to say than it is to put into implementation.
For my coverage of the challenges facing AI safety and security, see the link here.

Governments would certainly jockey to use the infinite Einsteins for purposes of gaining geopolitical power. A nation that wanted to get on the map as a superpower could readily launch into the top sphere by having the infinite Einsteins provide it with a discovery that it alone would be aware of and could exploit. The national and international ramifications would be of great consequence; see my discussion at the link here.

I promised at the start of this discussion to eventually bring artificial superintelligence into the matter at hand. The reason that ASI deserves a carve-out is that anything we have to say about ASI is purely blue sky. AGI is at least based on exhibiting intelligence of the kind that we already know and see. True ASI is something that extends beyond our mental reach, since it is superintelligence.

Let's assume that ASI would not only imbue infinite Einsteins but would also go far beyond Einstein-level thinking to super-Einstein thresholds. With AGI, we might have a solid chance of controlling the infinite Einsteins. Maybe. In the case of ASI, all bets are off. The ASI would be able to run circles around us. Whatever the ASI decides to do with the infinite super Einsteins is likely beyond our control.

Congrats, you've now been introduced to the infinite Einsteins conjecture. Let's end for now with a famous remark attributed to Einstein: 'Two things are infinite: the universe and human stupidity, and I'm not sure about the universe.' This highlights that whether we can suitably harness an AGI containing infinite Einsteins depends upon both human acumen and human stupidity. Hopefully, our better half prevails.
