
What Is Superintelligence? Everything You Need to Know About AI's Endgame
Some people use the term "superintelligence" interchangeably with artificial general intelligence or sci-fi-level sentience. Others, like Meta CEO Mark Zuckerberg, use it to signal their next big moonshot.
ASI, or artificial superintelligence, has a more specific meaning in AI circles. It refers to an intelligence that doesn't just answer questions but could outthink humans in every field: medicine, physics, strategy, creativity, reasoning, emotional intelligence and more. We're not there yet, but the race has already started.
In July, Zuckerberg said during an interview with The Information that his company is chasing "personal superintelligence" to "put the power of AI directly into individuals' hands." Or, in Meta's case, probably in everyone's smart glasses.
That ambition kicked off a recruiting spree for top researchers across Silicon Valley and a reshuffling of Meta's FAIR team (now Meta AI), all meant to push Meta closer to AGI and, eventually, ASI.
So, what exactly is superintelligence, how close are we to it, and should we be excited or terrified? Let's break it down.
What is superintelligence?
Superintelligence doesn't have a formal definition, but it's generally described as a hypothetical AI system that would outperform humans at every cognitive task. It could process vast amounts of data instantly, reason across domains, learn from mistakes, self-improve, develop new scientific theories, write flawless code, and maybe even make emotional or ethical judgments.
The idea was popularized by philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which warned of a scenario in which an AI system becomes smarter than humans, self-improves rapidly and then escapes our control. That vision sparked both excitement and fear among tech experts.
Speaking to CNET, Bostrom says many of his 2014 warnings "have proven quite prescient." What has surprised him, he says, is "how anthropomorphic current AI systems are," with large language models behaving in surprisingly humanlike ways.
Bostrom says he's now shifting his attention toward deeper issues, including "the moral status of digital minds" and the relationship between the superintelligence we build and other superintelligences -- what he refers to as "the cosmic host."
For some, ASI represents the pinnacle of progress, a tool to cure disease, reverse climate change and crack the secrets of the universe. For others, it's a ticking time bomb -- one wrong move and we're outmatched by a machine we can't control.
ASI is sometimes called the last human invention, not because progress would stop there, but because a superintelligence could make every invention we'd need after it. British mathematician Irving John Good described this runaway self-improvement as an "intelligence explosion" back in 1965.
Superintelligence doesn't exist yet. We're still in the early stages of what's called artificial narrow intelligence: AI that's great at specific tasks like translation, summarization and image generation, but incapable of broader reasoning. Tools like ChatGPT, Gemini, Copilot, Claude and Grok fall into this category. They're impressive, but still flawed, prone to hallucinations and lacking true reasoning or understanding.
To reach ASI, AI needs to first pass through another stage: artificial general intelligence.
What is AGI?
AGI, or artificial general intelligence, refers to a system that can learn and reason across a wide range of tasks, not just one domain. It could match human-level versatility, such as learning new skills, adapting to unfamiliar problems and transferring knowledge across fields.
Unlike current chatbots, which rely heavily on training data and struggle outside of predefined rules, AGI would handle complex problems flexibly. It wouldn't just answer questions about math and history; it could invent new solutions, explain them and apply them elsewhere.
Current models hint at AGI traits, like multimodal systems that handle text, images and video. But true AGI requires breakthroughs in continual learning (updating knowledge without forgetting old stuff) and real-world grounding (understanding context beyond data). And none of the major models today qualify as true AGI, though many AI labs, including OpenAI, Google DeepMind and Meta, list it as their long-term target.
Once AGI arrives and begins to self-improve, ASI -- a system smarter than any human in every area -- could follow quickly.
How close are we to superintelligence?
A superintelligent future concept I generated using Grok AI. (Grok / Screenshot by CNET)
That depends on who you ask. A 2024 survey of 2,778 AI researchers paints a sobering picture. The aggregate forecasts give a 50% chance of machines outperforming humans in every possible task by 2047. That's 13 years sooner than a 2022 poll predicted. There's a 10% chance this could happen as early as 2027, according to the survey.
For job automation specifically, researchers estimate a 10% chance that all human occupations become fully automatable by 2037, reaching 50% probability by 2116.
Most concerning of all, between 38% and 51% of the surveyed experts assigned at least a 10% chance to advanced AI causing human extinction.
Geoffrey Hinton, often called the Godfather of AI, warned in a recent YouTube podcast that if superintelligent AI ever turned against us, it might unleash a biological threat like a custom virus -- super contagious, deadly and slow to show symptoms -- without risking itself.
Resistance would be pointless, he said, because "there's no way we're going to prevent it from getting rid of us if it wants to." Instead, he argued that the focus should be on building safeguards early.
"What you have to do is prevent it ever wanting to," he said in the podcast. He said this could be done by pouring resources into AI that stays friendly.
Still, Hinton confessed he's struggling with the implications: "I haven't come to terms with what the development of superintelligence could do to my children's future. I just don't like to think about what could happen."
Factors like faster computing, quantum AI and self-improving models could accelerate things. Hinton expects superintelligence in 10 to 20 years. Zuckerberg has said he believes ASI could arrive within the next two to three years, and OpenAI CEO Sam Altman's estimate falls somewhere in between those time frames.
Most researchers agree we're still missing key ingredients, like more advanced learning algorithms, better hardware and the ability to generalize knowledge like a human brain. IBM points to areas like neuromorphic computing (hardware inspired by human neurons), evolutionary algorithms and multisensory AI as building blocks that might get us there.
Meta's quest for 'personal superintelligence'
Meta launched its Superintelligence Labs in June, led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub CEO), backed by a $14.3 billion investment in Scale AI and $64 billion to $72 billion earmarked for data centers and AI infrastructure.
Zuckerberg doesn't shy away from Greek mythology, with names like Prometheus and Hyperion for his two AI data superclusters (massive computing centers). He also doesn't talk about artificial superintelligence in abstract terms. Instead, he claims that Meta's specific focus is on delivering "personal superintelligence to everyone in the world." This vision, according to Zuckerberg, sets Meta apart from other research labs that he says primarily concentrate on "automating economically productive work."
Bostrom thinks this isn't mere hype. "It's possible we're only a small number of years away from this," he said of Meta's plans, noting that today's frontier labs "are quite serious about aiming for superintelligence, so it is not just marketing moves."
Though the effort is still in its early stages, Meta is actively recruiting top talent from companies like OpenAI and Google. Zuckerberg explained in his interview with The Information that the market is extremely competitive because so few people possess the requisite skills. Meta and Zuckerberg didn't respond to requests for comment.
Should humans subscribe to the idea of superintelligent AI?
There are two camps in the AI world: the boosters, who inflate AI's benefits and seemingly ignore its downsides, and the doomers, who believe AI will inevitably take over and end humanity. The truth probably lands somewhere in the middle. Widespread public fear and resistance, fueled by dystopian sci-fi and very real concerns over job loss and massive economic disruption, could slow progress toward superintelligence.
One of the biggest problems is that we don't really know what AGI even looks like in machines, much less ASI. Is it the ability to reason across domains? Hold long conversations? Form intentions? Build theories? None of the current models, including Meta's Llama 4 and xAI's Grok 4, can reliably do any of this.
There's also no agreement on what counts as "smarter than humans." Does it mean acing every test, proving new theorems in math and physics, or solving climate change?
And even if we get there -- should we? Building systems vastly more intelligent than us could pose serious risks, especially if they act unpredictably or pursue goals misaligned with ours.
Without strict controls, a superintelligent system could manipulate other systems or act autonomously in ways we don't fully understand. Brendan Englot, director of the Stevens Institute for Artificial Intelligence, told CNET that he believes "an important first step is to approach cyber-physical security similarly to how we would prepare for malicious human-engineered threats, except with the expectation that they can be generated and launched with much greater ease and frequency than ever before."
That said, Englot isn't convinced that current AI can truly outpace human understanding.
"AI is limited to acting within the boundaries of our existing knowledge base," Englot tells CNET. "It is unclear when and how that will change."
Regulations like the EU AI Act aim to help, but global alignment is tricky. For example, China's approach differs wildly from the West's.
Trust is one of the biggest open questions. A superintelligent system might be incredibly useful, but also nearly impossible to audit or constrain. And when AI systems draw from biased or chaotic data like real-time social media, those problems compound.
Some researchers believe that given enough data, computing power and clever model design, we'll eventually reach AGI and ASI. Others argue that current AI approaches, especially LLMs, are fundamentally limited and won't scale to true general or superhuman intelligence; the human brain, after all, has roughly 100 trillion connections, and today's models don't come close to replicating how it works. That's before accounting for our capacity for emotional experience and depth, arguably humanity's strongest and most distinctive attribute.
But progress moves fast, and it would be naive to dismiss ASI as impossible. If it does arrive, it could reshape science, economics and politics -- or threaten them all. Until then, general intelligence remains the milestone to watch.
If and when superintelligence does become a reality, it could profoundly redefine human life itself. According to Bostrom, we'd enter what he calls a "post-instrumental condition," fundamentally rethinking what it means to be human. Still, he's ultimately optimistic about what lies on the other side, exploring these ideas further in his most recent book, Deep Utopia.
"It will be a profound transformation," Bostrom tells CNET.