The Ardent Belief That Artificial General Intelligence Will Bring Us Infinite Einsteins

Forbes

14 hours ago



In today's column, I examine an AI conjecture known as the infinite Einsteins. The deal is this. By attaining artificial general intelligence (AGI) and artificial superintelligence (ASI), the resulting AI will allegedly provide us with an infinite number of AI-based Einsteins. We could then have Einstein-level intelligence massively available 24/7. The possibilities seem incredible and enormously uplifting. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or perhaps even the more distant possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI. The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.

Let's momentarily put aside the attainment of ASI and focus solely on the potential achievement of AGI, just for the sake of this discussion. No worries -- I'll bring ASI back into the big picture in the concluding remarks.

Imagine that we arrive at AGI. Since AGI is going to be on par with human intelligence, and since Einstein was a human who made use of his human intelligence, the logical conclusion is that AGI would be able to exhibit the intelligence of the likes of Einstein. The AGI isn't literally simulating Einstein per se. Instead, the belief is that AGI would contain the equivalent of Einstein-level intelligence.

If AGI can exhibit the intelligence of Einstein, the next logical assumption is that this Einstein-like capacity could be replicated within the AGI. We already know that even contemporary generative AI allows for the simulation of multiple personas, see my analysis at the link here. With sufficient hardware, the number of personas could be in the millions or billions, see my coverage at the link here. In short, we could have a vast number of Einstein-like intelligence instances running at the same time.
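To make the replication idea a bit more concrete, here is a minimal sketch of how many persona-conditioned instances of a generative AI model might be posed the same question side by side. This is purely illustrative: the client class, its complete() method, and the persona prompt are hypothetical placeholders rather than any specific vendor's API.

# Minimal illustrative sketch: running many "Einstein persona" instances in parallel.
# HypotheticalLLMClient and its complete() method are placeholders, not a real API.
from concurrent.futures import ThreadPoolExecutor

PERSONA_PROMPT = (
    "You are an Einstein-level theoretical physicist. "
    "Reason carefully and show your work."
)

class HypotheticalLLMClient:
    """Stand-in for whatever generative AI service would actually be used."""
    def complete(self, system_prompt: str, user_prompt: str) -> str:
        # A real implementation would call a model endpoint here.
        return f"[placeholder answer to: {user_prompt}]"

def ask_personas(client: HypotheticalLLMClient, question: str, n_instances: int) -> list[str]:
    """Pose the same question to n independent persona-conditioned instances."""
    with ThreadPoolExecutor(max_workers=min(n_instances, 32)) as pool:
        futures = [
            pool.submit(client.complete, PERSONA_PROMPT, question)
            for _ in range(n_instances)
        ]
        return [f.result() for f in futures]

if __name__ == "__main__":
    answers = ask_personas(HypotheticalLLMClient(), "Is there a hidden particle?", n_instances=5)
    print(len(answers), "independent answers collected")

Whether such brute replication would amount to anything like Einstein-level insight is, of course, exactly the open question of the conjecture.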
You can quibble that this is not the same as saying that the number of Einsteins would be infinite. It would not be an infinite count. There would be some limits involved. Nonetheless, the infinite Einsteins as a catchphrase rolls off the tongue and sounds better than saying the millions or billions of Einsteins. We'll let the use of the word 'infinite' slide in this case and agree that it means a quite large number of instances.

Einstein was known for his brilliance when it comes to physics. I bring this up because it is noteworthy that he wasn't known for expertise in biology, medicine, law, or other domains. When someone refers to Einstein, they are essentially emphasizing his expertise in physics, not in other realms. Would an AGI that provides infinite Einsteins then be considered to have heightened expertise solely in physics? To clarify, that would certainly be an amazing aspect. No doubt about it. On the other hand, those infinite Einsteins would presumably have little to offer in other realms such as biology, medicine, and so on. I'm just trying to establish the likely boundaries involved.

Imagine this disconcerting scenario. We have infinite Einsteins via AGI. People assume that those Einsteins are brilliant in all domains. The AGI, via this capacity, asserts a seeming breakthrough in medicine. But the reality is that this is beyond the purview of the infinite Einsteins and turns out to be incorrect. We might be so enamored with the infinite Einsteins that we fall into the mental trap that anything the AGI emits via that capacity is aboveboard and completely meritorious.

Some people interpret the word 'Einstein' more generally and take it to mean being ingenious on an all-around basis. For them, the infinite Einsteins consist of innumerable AI-based geniuses of all kinds. How AGI would model this capacity is debatable. We don't yet know how AGI will work. An open question is whether other forms of intelligence, such as emotional intelligence (EQ), get wrapped into the infinite Einsteins. Are we strictly considering book knowledge and straight-ahead intelligence, or shall we toss in all manner of intelligence, including the kitchen sink?

There is no debate that Einstein made mistakes and was not perfect. He was at times unsure of his theories and proposals and had to correct his work. Historical reviews point out that he resisted the emerging quantum mechanics and vowed that God does not play dice, a diss against the budding field of quantum theory. A seemingly big miss. Would an AGI that allows for infinite Einsteins be equally flawed in those Einsteins, as per the imperfections of the modeled human? This is an important point. Envision that we are making use of AGI and the infinite Einsteins to explore new frontiers in physics. Will we know if those Einsteins are making mistakes? Perhaps they opt to reject ideas that are worthy of pursuit. It seems doubtful that we would seek to pursue those rejected ideas, simply due to the powerful assumption that all those Einsteins can't be wrong.

Further compounding the issue would be the matter of AI hallucinations. You've undoubtedly heard or read about so-called AI hallucinations. The precept is that sometimes AI generates confabulations, false statements that appear to be true. A troubling facet is that we aren't yet sure when this occurs, nor how to prevent it, and ferreting out AI hallucinations can be problematic (see my extensive explanation at the link here).
There is a double whammy about those infinite Einsteins. By themselves, they presumably would at times make mistakes and be imperfect. They also would be subject to AI hallucinations. The danger is that we would be relying on the aura of those infinite Einsteins as though they are perfect and unquestionably right.

Would all the AGI-devised infinite Einsteins be in utter agreement with each other? You might claim that the infinite Einsteins would of necessity be of a like mind and ergo would all agree with each other. They lean in the same direction. Anything they say would be an expression of the collective wisdom of the infinite Einsteins. The downside there is that if the infinite Einsteins are acting like lemmings, this seems to increase the chances of any mistakes being given an overabundance of confidence. Think of it this way. The infinite Einsteins tell us in unison that there's a hidden particle at the core of all physics. Any human trying to disagree faces quite a gauntlet since an infinite set of Einsteins has made a firm declaration. A healthy principle of science is supposed to be the use of scientific discourse and debate. Would those infinite Einsteins be dogmatic, or would they be willing to engage in open-ended human debates and inquiries?

Let's consider that maybe the infinite Einsteins would not be in utter agreement with each other. Perhaps they would engage in scientific debate among themselves. This might be preferred since it could lead to creative ideas and thinking outside the box. The dilemma is what we do when the infinite Einsteins tell us they cannot agree. Do we have them vote and, based on a tally, decide that whatever is stated is majority-favored? How might the intellectual battle amid infinite Einsteins be suitably settled?

A belief that there will be infinite Einsteins ought to open our eyes to the possibility that there would also be infinite Isaac Newtons, Aristotles, and so on. There isn't any particular reason to restrict AGI to just Einsteins. Things might seem to get out of hand. All these infinite geniuses become mired in disagreements across the board. Who are we to believe? Maybe the Einsteins are convinced by some other personas that up is down and down is up. Endless arguments could consume tons of precious computing cycles.

We must also acknowledge that evildoers of historical note could also be part of the infinite series. There could be infinite Genghis Khans, Joseph Stalins, and the like. Might they undercut the infinite Einsteins? Efforts to try to ensure that AI aligns with contemporary human values are a vital consideration, and numerous pathways are currently being explored, see my discussion at the link here. The hope is that we can stave off the infinite evildoers within AGI.

Einstein had grave concerns about the use of atomic weapons. It was his handiwork that aided in the development of the atomic bomb. He found himself mired in concern at what he had helped bring to fruition. An AGI with infinite Einsteins might discover the most wonderful of new inventions. The odds are that those discoveries could be used for the good of humankind or to harm humankind. It is a quandary whether we want those infinite Einsteins to share what they uncover widely and in an unfettered way. Here's the deal. Would we restrict access to the infinite Einsteins so that evildoers could not use the capacity to devise destructive possibilities? That's a lot easier to say than it is to put into implementation.
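Returning briefly to the voting question raised above, here is a minimal sketch of one way answers from many persona instances could be tallied by simple majority. It is an illustrative aggregation scheme under stated assumptions, not a claim about how AGI would actually resolve internal debate.

# Illustrative sketch: settle disagreement among persona instances by majority tally.
from collections import Counter

def majority_answer(answers: list[str]) -> tuple[str, float]:
    """Return the most common (normalized) answer and the share of instances backing it."""
    normalized = [a.strip().lower() for a in answers]
    tally = Counter(normalized)
    top_answer, votes = tally.most_common(1)[0]
    return top_answer, votes / len(normalized)

# Example: five instances, four agree.
answers = [
    "Yes, a hidden particle",
    "yes, a hidden particle",
    "Yes, a hidden particle",
    "No such particle",
    "yes, a hidden particle",
]
print(majority_answer(answers))  # ('yes, a hidden particle', 0.8)

A simple tally like this papers over the deeper problem the column raises: a lopsided vote can just as easily mean the instances share the same blind spot as that they are collectively right.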
For my coverage of the challenges facing AI safety and security, see the link here.

Governments would certainly jockey to use the infinite Einsteins for purposes of gaining geopolitical power. A nation that wanted to get on the map as a superpower could readily launch into the top sphere by having the infinite Einsteins provide it with a discovery that it alone would be aware of and exploit. The national and international ramifications would be of great consequence, see my discussion at the link here.

I promised at the start of this discussion to eventually bring artificial superintelligence into the matter at hand. The reason that ASI deserves a carve-out is that anything we have to say about ASI is purely blue sky. AGI is at least based on exhibiting intelligence of the kind that we already know and see. True ASI is something that extends beyond our mental reach since it is superintelligence. Let's assume that ASI would not only embody infinite Einsteins but would also go far beyond Einstein-level thinking to super-Einstein thresholds. With AGI, we might have a solid chance of controlling the infinite Einsteins. Maybe. In the case of ASI, all bets are off. The ASI would be able to run circles around us. Whatever the ASI decides to do with the infinite super Einsteins is likely beyond our control.

Congrats, you've now been introduced to the infinite Einsteins conjecture. Let's end for now with a famous quote from Einstein: 'Two things are infinite: the universe and human stupidity, and I'm not sure about the universe.' This highlights that whether we could suitably harness an AGI containing infinite Einsteins depends on both human acumen and human stupidity. Hopefully, our better half prevails.
