Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity


Forbes · a day ago

Speculating on the future of AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).
In today's column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary, and his viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects couched in AI-speak and other insider terminology that require mindful interpretation and translation.
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
First, some fundamentals are required to set the stage for this discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even attain the far-reaching possibility of artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; perhaps AGI will be achievable in decades or even centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
In a new posting entitled 'The Gentle Singularity,' published by Sam Altman on his personal blog on June 10, 2025, the famed AI prognosticator made these remarks (excerpts):
There's a whole lot in there to unpack.
His upbeat opinion piece contains commentary about many unsettled considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates suggesting when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more.
Let's briefly explore the mainstay elements.
A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren't. Sam Altman's reference to the AI event horizon alludes to the pathway we are currently on, and he unequivocally asserts that, in his opinion, we have not only reached the event horizon but are already well past it. As espoused, the takeoff has started.
Just to note, that's a claim embodying immense boldness and brashness, and not everyone in AI concurs with that viewpoint.
Consider these vital facets.
First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI. The incredible semblance of natural language fluency exhibited by the computational capabilities of contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI.
However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds regarding how much further generative AI can be extended (see my coverage at the link here). Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck.
Worse still, we might be off-target and going in the wrong direction altogether.
Nobody can say for sure whether we are on the right path or not. It is a guess. Well, Sam Altman has planted a flag that we are incontrovertibly on the right path and that we've already zipped down the roadway quite a distance. Cynics might find this a self-serving perspective since it reinforces and reaffirms the direction that OpenAI is currently taking.
Time will tell, as they say.
Another consideration in the AI field is that perhaps there will be a kind of singularity that serves as a key point at which AGI or ASI will readily begin to emerge and keenly showcase that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.
Some believe that the AI singularity will be a nearly instantaneous split-second affair, happening faster than the human eye can observe. One moment we will be working strenuously on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully complete AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.
Perhaps though the singularity will be a long and drawn-out activity.
There are those who speculate the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfold. Additionally, there might not be anything resembling a singularity at all, and we've just concocted some zany theory that has no basis in reality.
Sam Altman's posting seems to suggest that the AI singularity is already underway (or perhaps happening in 2030 or 2035) and that it will be a gradually emerging phenomenon rather than an instantaneous one.
Interesting conjecture.
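To make the fast-versus-gentle distinction a bit more concrete, here is a minimal toy sketch in Python. It is purely my own illustration, not anything drawn from Altman's post or from OpenAI, and the feedback rates and step counts are arbitrary assumptions chosen only to show how the same compounding loop can look explosive or gradual depending on how strongly each round of improvement feeds the next.

def capability_curve(feedback: float, steps: int, start: float = 1.0) -> list[float]:
    """Capability levels where each step multiplies the prior level by (1 + feedback)."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1.0 + feedback))
    return levels

# Hypothetical feedback strengths, chosen only for illustration.
hard = capability_curve(feedback=1.0, steps=10)    # each step doubles capability
gentle = capability_curve(feedback=0.1, steps=10)  # each step adds 10%

print(f"Hard takeoff after 10 steps:   {hard[-1]:,.0f}x starting capability")
print(f"Gentle takeoff after 10 steps: {gentle[-1]:.1f}x starting capability")

Under the stronger feedback assumption, capability jumps roughly a thousand-fold in ten steps, while the weaker loop climbs to less than triple the starting point over the same span. That, in rough strokes, is the intuition separating a hard takeoff from the gentle variety.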
Right now, efforts to forecast when AGI and ASI are going to be attained are generally based on putting a finger up into prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis to them.
There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.
A somewhat quieter approach to the date-guessing game is to use surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.
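For those curious how such polls get boiled down to a single headline year, here is a tiny Python illustration using made-up responses (not the actual survey data), showing why the median is the figure usually reported: it resists being yanked around by a few extreme outlier predictions.

from statistics import median

# Hypothetical, made-up expert responses to "In what year will AGI arrive?"
predicted_agi_years = [2029, 2032, 2035, 2040, 2040, 2045, 2060, 2100]

print(f"Median forecast: {median(predicted_agi_years):.0f}")
print(f"Mean forecast:   {sum(predicted_agi_years) / len(predicted_agi_years):.0f}")

Notice how the single 2100 response drags the average roughly eight years later while barely budging the median; that robustness is a big part of why consensus-style reporting leans on the median.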
Depending on how you interpret Sam Altman's latest blog post, it isn't clear whether AGI is supposed to happen by 2030 or 2035, or whether he means ASI rather than AGI, since he refers to superintelligence, which might be his way of expressing ASI or perhaps AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I've previously covered his shifting definitions of AGI and ASI, i.e., a moving of the cheese, at the link here.
We'll know how things turned out in presumably a mere 5 to 10 years. Mark your calendars accordingly.
An element of the posting that has especially galled AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That's certainly happy news for the world at large. Utopia awaits.
There is a decidedly other side to that coin.
AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' meaning the probability of doom, i.e., the chance that AI zonks us entirely, which is also known as the existential risk of AI or x-risk.
The other camp entails the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans will ever need to make, but that's good in the sense that AI will invent things we never could have envisioned.
No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.
You can readily discern which camp the posting sides with, namely roses and fine wine.
It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes, the wording appears to brazenly assert that the future is utterly known and predictable. With a sense of flair and confidence, many of these prognostications can be easily misread as a bushel of facts and knowns, rather than a bundle of opinions and conjecture.
Franklin D. Roosevelt wisely stated: 'There are as many opinions as there are experts.' Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI.
You'll be immeasurably glad you were cautious and alert.
