Nvidia might never top the RTX 4090
The RTX 4090 might be the best graphics card Nvidia has ever released, and we may never see a flagship quite like it again.
There's no doubt the RTX 4090 is extremely powerful, but it's not raw power alone that made it the flagship to end all flagships. I mean, the new RTX 5090 is already faster, and I'm confident Nvidia will continue to release massive GPUs that cost thousands of dollars in the future. But the RTX 4090 remains a crowning achievement for Team Green, and an inflection point for graphics cards more broadly.
Nvidia has maintained some sort of halo GPU for several generations, mostly in a bid to claim performance dominance over AMD. Those cards originally fell under the Titan umbrella, but Nvidia changed course with its Ampere generation, releasing its first 90-class GPU in the form of the RTX 3090. It was a Titan in all but name, but instead of being pushed into a corner for enthusiasts with thousands of dollars to burn, it was part of the main range. The much more reasonably priced RTX 3080 was considered the 'flagship' of the generation, but by bringing a Titan-class option into the main product stack, Nvidia was readjusting expectations.
One generation later, the RTX 4090 was suddenly the 'flagship.' Of course, Nvidia made an RTX 4080, but it wasn't the GPU on every PC gamer's lips. The RTX 4090 was. In the course of one generation, Nvidia's flagship offering went from $700 to $1,600, more than doubling the price. Nvidia had to justify a price that it had never pushed its GPUs to in the past.
And boy, did it justify the price increase. Unlike traditional Titan-class cards, the RTX 4090 actually delivered good value for the money. It was a better value than the RTX 3090, better than the RTX 3080, and even better than AMD's RX 6950 XT. This was a flagship that didn't accept the idea of diminishing returns. Even at $1,600, Nvidia wasn't just keeping pace with the previous generation's price-to-performance ratio; it was exceeding it.
It was something we had never seen before. Nvidia could claim dominance with cards like the RTX 3090 Ti, but you were forced to throw any ideas about value out the window. When the RTX 4090 was released, it was nearly 70% faster than the next fastest graphics card you could buy. That's an impressive generational uplift anywhere, let alone on a flagship GPU.
Already, with the RTX 5090, we can see how much lower the generational uplift is. With Nvidia's latest flagship, you're looking at a boost of around 30%, which is a far cry from what Nvidia delivered with the RTX 4090. We're only one generation on, but the RTX 4090 feels like an anomaly compared to both past and current generations, and based on the direction of PC hardware innovation, we may never see a flagship that can deliver on the same level.
Moore's Law. It's a concept that only Intel seems to be defending these days (the law is named after Intel co-founder Gordon Moore, after all), with Nvidia and now even AMD acknowledging that it's coming to an end. Doubling transistor density every couple of years at roughly the same cost hasn't been the reality of PC hardware for some time, and the slowdown has become too pronounced to ignore.
The concept of Moore's Law has been a north star for the PC industry, and it's served to get a disparate group of companies on board with a shared vision. Nvidia didn't need to invest billions in the next era of semiconductor manufacturing; TSMC was already doing it. Like clockwork, transistors got smaller and smaller, allowing companies like Nvidia to squeeze more and more of them on a graphics card without taking up extra space.
Yes, even as recently as the RTX 4090, Nvidia was executing on the idea of Moore's Law. The RTX 3090 packed 28.3 billion transistors at a density of 45.1 million per square millimeter. For the RTX 4090, Nvidia fit in 76.3 billion, at nearly triple the density: 125.3 million per square millimeter. Compare that leap to the RTX 5090. Its transistor count rises to 92.2 billion, but its density is actually lower, at 122.9 million per square millimeter.
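Those density figures tell the story on their own. As a quick sanity check, here's a minimal Python sketch, using only the numbers quoted above, that computes the generation-to-generation density ratios:

```python
# Transistor density figures (millions of transistors per mm^2),
# as quoted in this article; treat them as approximate spec-sheet values.
gpus = {
    "RTX 3090": {"transistors_b": 28.3, "density_mtr_mm2": 45.1},
    "RTX 4090": {"transistors_b": 76.3, "density_mtr_mm2": 125.3},
    "RTX 5090": {"transistors_b": 92.2, "density_mtr_mm2": 122.9},
}

def density_ratio(older: str, newer: str) -> float:
    """Return how many times denser the newer GPU is than the older one."""
    return gpus[newer]["density_mtr_mm2"] / gpus[older]["density_mtr_mm2"]

print(f"3090 -> 4090: {density_ratio('RTX 3090', 'RTX 4090'):.2f}x density")
print(f"4090 -> 5090: {density_ratio('RTX 4090', 'RTX 5090'):.2f}x density")
```

Running it shows roughly a 2.78x density jump from the RTX 3090 to the RTX 4090, versus a slight density regression from the RTX 4090 to the RTX 5090, which is exactly the inflection the article is describing.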
It's not a surprise, either, as Nvidia is using the same TSMC N4 node for its RTX 50-series GPUs as it did for its RTX 40-series GPUs. Sticking with the same node across two consecutive flagship generations is a rare move for Nvidia, and a telling sign of the times. The brute-force method of squeezing more transistors onto a chip just doesn't work like it used to.
Nvidia can't deliver a generational uplift on the level of the RTX 4090 unless transistors get smaller, and that's becoming increasingly difficult to accomplish. If we ever do see a flagship that leads the pack the way the RTX 4090 did, it won't come from jumping down to a smaller node.
Don't worry, we're not just going to get the same graphics card over and over again. Nvidia is already establishing solutions to increase performance, and I'm sure there will be even more in the future. The idea of a 'performance boost' just looks a little bit different than it used to.
It's not surprising that Nvidia debuted DLSS 4 Multi-Frame Generation alongside RTX 50-series GPUs. Although Nvidia delivered a performance boost with the RTX 5090, that largely came as a function of a larger chip and more power compared to the RTX 4090. If you need evidence of that, just look at the RTX 5080. When scaling down to a more reasonable level of die size and power, Nvidia is only delivering a slight bump in performance, hoping to make up the deficit with AI-generated frames.
That's the new idea of a performance boost. AI is the dynamic that breaks through the dead end of Moore's Law, for better or worse. Instead of just rendering every pixel faster, we'll render fewer pixels and make up the difference with AI. That happens through upscaling, through frame generation, and now even through multi-frame generation.
I know the idea of 'fake' frames and upscaled images rubs some folks the wrong way, and I get it. When graphics cards cost thousands of dollars, you'd hope for more than software improvements. But with process innovation slowing to a crawl, software is where performance gains will increasingly come from. If you're holding out hope for another RTX 4090-scale leap in raw performance, you're going to be disappointed.
There may be some massive leap forward in performance in the future, but it won't look the same as what we saw with the RTX 4090. As much as I'm rooting for more powerful graphics cards for years to come, whether they come from Nvidia or not, it's important to reset expectations in the meantime.
