
AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier
In today's column, I examine an unresolved question about the nature of human intelligence, which in turn has a great deal to do with AI, especially regarding achieving artificial general intelligence (AGI) and potentially even reaching artificial superintelligence (ASI). The thorny question is often referred to as the human ceiling assumption. It goes like this. Is there a ceiling or endpoint that confines how far human intellect can go? Or does human intellect extend indefinitely, with nearly infinite possibilities?
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the farther-reaching possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved decades or even centuries from now. The AGI attainment dates that are floating around vary wildly and are wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Human Intellect As A Measuring Stick
Have you ever pondered the classic riddle that asks: how high is up?
I'm sure that you have.
Children ask this vexing question of their parents. The usual answer is that up goes to the outer edge of Earth's atmosphere. After hitting that threshold, up continues onward into outer space. Up is either a bounded concept based on our atmosphere or it is a nearly infinite notion that goes as far as the edge of our expanding universe.
I bring this riddle to your attention since it mirrors an akin question about the nature of human intelligence: Is there a ceiling to human intellect, or does it keep extending indefinitely?
In other words, the intelligence we exhibit currently is presumably not our upper bound. If you compare our intelligence with that of past generations, it seems apparent that we keep increasing in intelligence on a generational basis. Will those born in the year 2100 be more intelligent than we are now? What about those born in 2200? All in all, most people would speculate that yes, the intelligence of those future generations will be greater than the prevailing intelligence of today.
If you buy into that logic, the up-related question rears its thorny head. Think of it this way. The capability of human intelligence keeps increasing generationally. At some point, will a generation exist that has capped out? That future generation would represent the highest that human intellect can ever go. Subsequent generations would either be of equal human intellect or less so, not more so.
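A small formal note may sharpen that reasoning: if each generation's intellect never decreases and a ceiling exists, the generational sequence has to settle at or below that ceiling. Here is a minimal sketch in standard notation, where $g_n$ is my own hypothetical shorthand for the (somehow measured) intelligence of generation $n$ and $c$ is the presumed cap:

```latex
\[
g_1 \le g_2 \le g_3 \le \cdots
\quad\text{and}\quad
g_n \le c \ \text{for all } n
\;\;\Longrightarrow\;\;
\lim_{n\to\infty} g_n \;=\; \sup_n g_n \;\le\; c .
\]
```

That is just the monotone convergence theorem at work: a nondecreasing sequence bounded above converges to its supremum. If no such bound $c$ exists, the sequence can grow without limit, which is the rival, no-ceiling possibility.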
The reason we want an answer to that question is that there is a pressing, present-day need to know whether such a limit exists. As noted earlier, AGI will be on par with human intellect, while ASI will be superhuman intelligence. Where does AGI top out, such that we can draw a line and say that's it? Anything above that line would be construed as superhuman or superintelligence.
Right now, using human intellect as a measuring stick is hazy because we do not know how long that line is. Perhaps the line ends at some given point, or maybe it keeps going infinitely.
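To make the measuring-stick metaphor concrete, here's a toy sketch, assuming purely for illustration that intellect could be scored on a single numeric scale and that a ceiling value were known. The scale, the ceiling constant, and the function are hypothetical inventions for this example, not an established metric:

```python
# Toy illustration only: assumes intellect is scorable on one numeric scale
# and that a human ceiling value is known. Neither assumption is established.

HUMAN_CEILING = 100.0  # hypothetical cap on human intellect (arbitrary units)

def classify_ai(capability: float, ceiling: float = HUMAN_CEILING) -> str:
    """Place an AI capability score relative to an assumed human ceiling."""
    if capability > ceiling:
        return "ASI: beyond the assumed human ceiling"
    if capability >= 0.95 * ceiling:
        return "AGI: on par with the assumed human maximum"
    return "conventional AI: below human intellect"

print(classify_ai(85.0))   # conventional AI
print(classify_ai(99.0))   # AGI
print(classify_ai(130.0))  # ASI

# If the ceiling is unbounded, the ASI branch can never fire -- the
# AGI/ASI dividing line cannot be drawn at all.
print(classify_ai(130.0, ceiling=float("inf")))  # conventional AI
```

Notice that the entire classification hinges on the ceiling being finite and known, which is precisely the unresolved proposition.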
Give that weighty thought some mindful pondering.
The Line In The Sand
You might be tempted to assume that there must be an upper bound to human intelligence. This intuitively feels right. We aren't at that limit just yet (so it seems!). One hopes that humankind will someday live long enough to reach that outer atmosphere.
For the sake of discussion, let's go with the assumption that human intelligence has a topping point. We can then declare that AGI must also have a topping point. The basis for that claim is certainly defensible. If AGI consists of mimicking or somehow exhibiting human intelligence, and if human intelligence hits a maximum, AGI will inevitably hit that same maximum. That's a definitional supposition.
Admittedly, we don't yet know what the maximum point is. No worries; at least we've landed on a stable belief that there is a maximum. We can then turn our attention toward figuring out where that maximum resides. No need to be stressed by the infinite aspects anymore.
Twists And Turns Galore
AI gets mired in controversy over this unresolved conundrum of a ceiling to human intelligence. Let's explore three notable possibilities.
First, if there is a ceiling to human intelligence, maybe that implies that there cannot be superhuman intelligence.
Say what?
It goes like this. Once we hit the top of human intelligence, bam, that's it, no more room to proceed further upward. Anything up until that point has been conventional human intelligence. We might have falsely thought that there was superhuman intelligence, but it was really just intelligence slightly ahead of conventional intelligence. There isn't any superhuman intelligence per se. Everything is confined to being within conventional intelligence. Thus, any AI that we make will ultimately be no greater than human intelligence.
Mull that over.
Second, well, if there is a ceiling to human intelligence, perhaps via AI we can go beyond that ceiling and devise superhuman intelligence.
That seems more straightforward. The essence is that humans top out but that doesn't mean that AI must also top out. Via AI, we might be able to surpass human intelligence, i.e., go past the maximum limit of human intelligence. Nice.
Third, if there isn't any ceiling to human intelligence, we would presumably have to say that superhuman intelligence is included in that infinite possibility. Therefore, the distinction between AGI and ASI is a falsehood. It is an arbitrarily drawn line.
Yikes, it is quite a mind-bending dilemma.
Without a firm resolution on whether there is a human intelligence cap, the chances of nailing down AGI and ASI remain elusive. We don't know the answer to this ceiling proposition; thus, AI research must make varying base assumptions about the unresolved topic.
AI Research Taking Stances
AI researchers often take the stance that there must be a maximum level associated with human intellect. They generally accept that there is a maximum even if we cannot prove it. That limit, altogether unknown but considered plausibly existent, becomes the dividing line between AGI and ASI. Once AI exceeds the human intellectual limit, we find ourselves in superhuman territory.
In a recently posted paper entitled 'An Approach to Technical AGI Safety and Security' by Google DeepMind researchers Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah Goodman, Allan Dafoe, Four Flynn, and Anca Dragan, arXiv, April 2, 2025, the authors made salient points along these lines.
The researchers have tried to make a compelling case that there is such a thing as superhuman intellect. The superhuman consists of that which goes beyond the human ceiling. Furthermore, AI won't get stuck at the human intellect ceiling; AI will surpass the human ceiling and proceed into the superhuman intellect realm.
Mystery Of Superhuman Intelligence
Suppose that there is a ceiling to human intelligence. If that's true, would superhuman intelligence be something entirely different from the nature of human intelligence? In other words, we are saying that human intelligence cannot reach superhuman intelligence. But the AI we are devising seems to be generally shaped around the overall nature of human intelligence.
How then can AI that is shaped around human intelligence attain superintelligence when human intelligence cannot apparently do so?
Two answers are most frequently voiced.
The first response to this exasperating enigma is that size might make the difference.
The human brain weighs approximately three pounds and is entirely confined to the size of our skulls, roughly 5.5 by 6.5 by 3.6 inches. The human brain consists of around 86 billion neurons and perhaps 1,000 trillion synapses. Human intelligence is seemingly stuck with whatever can happen within those sizing constraints.
AI is software and data that runs across perhaps thousands or millions of computer servers and processing units. We can always add more. The size limit is not as constraining as a brain that is housed inside our heads.
The bottom line is that the reason we might have AI that exhibits superhuman intelligence is due to exceeding the physical size limitations that human brains have. Advances in hardware would allow us to substitute faster processors and more processors to keep pushing AI onward into superhuman intelligence.
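To put rough numbers on the size argument, here is a back-of-the-envelope sketch comparing the brain estimates above with the parameter count of a large AI model. The synapse-to-parameter analogy is loose and contested, and the one-trillion-parameter model is just an illustrative round number, not any specific system:

```python
# Back-of-the-envelope scale comparison; illustrative only.
# Equating synapses with model parameters is a loose, contested analogy.

NEURONS = 86e9        # ~86 billion neurons (estimate cited above)
SYNAPSES = 1e15       # ~1,000 trillion synapses (estimate cited above)
MODEL_PARAMS = 1e12   # hypothetical 1-trillion-parameter model (round number)

print(f"Synapses per neuron: {SYNAPSES / NEURONS:,.0f}")                        # ~11,628
print(f"Brain synapses vs. model parameters: {SYNAPSES / MODEL_PARAMS:,.0f}x")  # 1,000x

# The brain's size is fixed; a server fleet's is not. Growing the fleet
# grows the parameter budget linearly -- the crux of the scaling argument.
PARAMS_PER_SERVER = 1e9  # hypothetical per-server share of parameters
for fleet_size in (1_000, 10_000, 100_000):
    print(f"{fleet_size:>7,} servers -> {fleet_size * PARAMS_PER_SERVER:.0e} parameters")
```

On this crude accounting, a trillion-parameter model sits roughly three orders of magnitude below the brain's estimated synapse count, yet unlike the brain, the hardware side has no fixed skull: more servers can always be bolted on.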
The second response is that AI doesn't necessarily need to conform to the biochemical compositions that give rise to human intelligence. Superhuman intelligence might not be feasible with humans due to the brain being biochemically precast. AI can easily be devised and revised to exploit all manner of new kinds of algorithms and hardware that differentiate AI capabilities from human capabilities.
Heading Into The Unknown
Those two considerations of size and differentiation could also work in concert. It could be that AI becomes superhuman intellectually because of both the scaling aspects and the differentiation in how AI mimics or represents intelligence.
Hogwash, some exclaim. AI is devised by humans. Therefore, AI cannot do better than humans can do. AI will someday reach the maximum of human intellect and go no further. Period, end of story.
Whoa, comes the retort. Think about humankind figuring out how to fly. We don't flap our arms like birds do. Instead, we devised planes. Planes fly. Humans make planes. Ergo, humans can decidedly exceed their own limitations.
The same will apply to AI. Humans will make AI. AI will exhibit human intelligence and at some point reach the upper limits of human intelligence. AI will then be further advanced into superhuman intelligence, going beyond the limits of human intelligence. You might say that humans can make AI that flies even though humans cannot do so.
A final thought for now on this beguiling topic. Albert Einstein is famously credited with saying: 'Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.' Quite a cheeky comment. Go ahead and give the matter of AI becoming AGI and possibly ASI some serious deliberation, but remain soberly thoughtful, since all of humanity might depend on what the answer is.
