
Latest news with #RTX3090

Chinese AI team wins global award for replacing Nvidia GPU with industrial chip

South China Morning Post

12-03-2025


In a bold challenge to US giant Nvidia's dominance in artificial intelligence (AI) hardware, Chinese researchers have trained a cutting-edge video-generation model on an off-the-shelf industrial chip – outperforming high-end GPUs in both speed and efficiency.

Their system, FlightVGM, recorded a 30 per cent performance boost and an energy efficiency 4½ times greater than Nvidia's flagship RTX 3090 GPU – all while running on the widely available V80 FPGA chip from Advanced Micro Devices (AMD), another leading US semiconductor firm.

The innovation earned top honours at the prestigious FPGA 2025 conference, which concluded on March 1. The win marked the first time a mainland Chinese team had claimed the event's Best Paper Award, signalling a seismic shift in the global race to optimise AI hardware.

Developed by scientists from Shanghai Jiao Tong University, Tsinghua University and Beijing-based start-up Infinigence-AI, the model could redefine how industries deploy cost-effective, energy-efficient AI systems, from robotic controls to autonomous vehicles.

FPGAs, or field-programmable gate arrays, are programmable semiconductor devices that allow post-manufacturing modifications to their circuitry and functionality. In contrast, conventional chips – CPUs (central processing units), GPUs (graphics processing units) and ASICs (application-specific integrated circuits) – have fixed functionalities once fabricated.

In video generation and general computing, FPGAs and Nvidia GPUs each have distinct advantages. FPGAs offer a customisable architecture tailored to specific applications, resulting in higher energy efficiency and lower latency. Meanwhile, Nvidia GPUs, known for their massive parallel computing power, excel at processing large-scale data and handling complex computational tasks.

Building on previous research, the Chinese team developed FlightVGM, the first FPGA-trained video-generation AI model. Through innovations in data architecture and scheduling methods, FlightVGM achieved computational performance that could outpace GPUs.
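Taken together, those two figures imply a sizeable drop in power draw. A minimal back-of-the-envelope sketch in Python, reading "energy efficiency" as performance per watt and assuming a nominal 350 W board power for the RTX 3090 (a published spec, not a figure from the article):

```python
# Back-of-the-envelope check of the reported FlightVGM figures.
# Assumption (not from the article): RTX 3090 board power of ~350 W.
gpu_power_w = 350.0        # nominal RTX 3090 TDP (assumed)
speedup = 1.30             # "30 per cent performance boost"
efficiency_gain = 4.5      # "4.5 times greater" performance per watt

# perf/W gain = speedup / (fpga_power / gpu_power), so the implied power is:
fpga_power_w = gpu_power_w * speedup / efficiency_gain
print(f"Implied V80 FPGA power draw: ~{fpga_power_w:.0f} W")  # ~101 W
```

On those assumptions, the V80 would be doing the same work at roughly 100 W, consistent with the article's framing of FPGAs as the more energy-efficient option.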

Nvidia might never top the RTX 4090

Yahoo

09-02-2025


The RTX 4090 might be the best graphics card Nvidia has ever released, and we may never see a flagship quite on the same level ever again. There's no doubt the RTX 4090 is extremely powerful, but it's not raw power alone that made it the flagship to end all flagships. I mean, the new RTX 5090 is already faster, and I'm confident Nvidia will continue to release massive GPUs that cost thousands of dollars in the future. But the RTX 4090 remains a crowning achievement for Team Green, and an inflection point for graphics cards more broadly.

Nvidia has maintained some sort of halo GPU for several generations, mostly in a bid to claim performance dominance over AMD. Those cards originally fell under the Titan umbrella, but Nvidia changed course with its Ampere generation, releasing the first 90-class GPU ever in the form of the RTX 3090. It's a Titan, but instead of being pushed into a corner for only enthusiasts with thousands of dollars to burn, it was part of the main range. The much more reasonably priced RTX 3080 was considered the 'flagship' of the generation, but by bringing a Titan-class option into the main product stack, Nvidia was readjusting expectations.

One generation later, the RTX 4090 was suddenly the 'flagship.' Of course, Nvidia made an RTX 4080, but it wasn't the GPU on every PC gamer's lips. The RTX 4090 was. In the course of one generation, Nvidia's flagship offering went from $700 to $1,600, more than doubling the price.

Nvidia had to justify a price that it had never pushed its GPUs to in the past. And boy, did it justify the price increase. Unlike graphics cards traditionally of the Titan kin, the RTX 4090 actually provided good value for the money. It was a better value than the RTX 3090, better than the RTX 3080, and even better than AMD's RX 6950 XT. This was a flagship that didn't accept the idea of diminishing returns. Even at $1,600, Nvidia was not only keeping pace with the price-to-performance ratio of the previous generation — it was exceeding it. It was something we had never seen before. Nvidia could claim dominance with cards like the RTX 3090 Ti, but you were forced to throw any ideas about value out the window.

When the RTX 4090 was released, it was nearly 70% faster than the next fastest graphics card you could buy. That's an impressive generational uplift anywhere, let alone on a flagship GPU. Already, with the RTX 5090, we can see how much lower the generational uplift is. With Nvidia's latest flagship, you're looking at a boost of around 30%, which is a far cry from what Nvidia delivered with the RTX 4090. We're only one generation on, but the RTX 4090 feels like an anomaly compared to both past and current generations, and based on the direction of PC hardware innovation, we may never see a flagship that can deliver on the same level.

Moore's Law. It's a concept that only Intel seems to be defending these days — its co-founder Gordon Moore made the observation, after all — with Nvidia and now even AMD recognizing that it's coming to an end. Delivering double the transistor density for half of the price every 18 months hasn't been the reality of PC hardware for years, and now, the rate of innovation is so low that it's becoming too much to ignore. The concept of Moore's Law has been a north star for the PC industry, and it's served to get a disparate group of companies on board with a shared vision. Nvidia didn't need to invest billions in the next era of semiconductor manufacturing; TSMC was already doing it.

Like clockwork, transistors got smaller and smaller, allowing companies like Nvidia to squeeze more and more of them onto a graphics card without taking up extra space. Yes, even as recently as the RTX 4090, Nvidia was executing on the idea of Moore's Law. There were 28.3 billion transistors on the RTX 3090, with a density of 45.1 million per square millimeter. For the RTX 4090, Nvidia packed in 76.3 billion, and at more than triple the density — 125.3 million per square millimeter. Compare that leap to the RTX 5090. It has a bump in transistors, up to 92.2 billion, but a lower density at 122.9 million per square millimeter. It's not a surprise, either, as Nvidia is using the same TSMC N4 node for its RTX 50-series GPUs as it did for its RTX 40-series GPUs. It's the first time in roughly a decade that Nvidia has reused a node across two generations, and it's a telling sign of the times.
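Dividing transistor count by density gives the implied die size, which makes the point concrete. A quick Python sketch using the figures quoted above (the die codenames are added for reference; they are not in the article):

```python
# Implied die area from the quoted transistor counts and densities:
# area (mm^2) = transistors / (transistors per mm^2).
gpus = {
    "RTX 3090 (GA102)": (28.3e9, 45.1e6),
    "RTX 4090 (AD102)": (76.3e9, 125.3e6),
    "RTX 5090 (GB202)": (92.2e9, 122.9e6),
}
for name, (transistors, density_per_mm2) in gpus.items():
    print(f"{name}: ~{transistors / density_per_mm2:.0f} mm^2")
```

That works out to roughly 628, 609, and 750 square millimeters respectively: the RTX 5090's extra transistors come from a physically larger die on the same node, not from a density improvement.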
The brute-force method of squeezing more transistors onto a chip just doesn't work like it used to. Nvidia can't deliver a generational uplift on the level of the RTX 4090 unless transistors get smaller, and that's becoming increasingly difficult to accomplish. If we do ever see a flagship that can lead the pack like the RTX 4090 has, it won't come from jumping down to a smaller node.

Don't worry, we're not just going to get the same graphics card over and over again. Nvidia is already establishing solutions to increase performance, and I'm sure there will be even more in the future. The idea of a 'performance boost' just looks a little bit different than it used to.

It's not surprising that Nvidia debuted DLSS 4 Multi-Frame Generation alongside the RTX 50-series GPUs. Although Nvidia delivered a performance boost with the RTX 5090, that largely came as a function of a larger chip and more power compared to the RTX 4090. If you need evidence of that, just look at the RTX 5080. When scaling down to a more reasonable die size and power budget, Nvidia is only delivering a slight bump in performance, hoping to make up the deficit with AI-generated frames. That's the new idea of a performance boost.

AI is the dynamic that breaks through the dead end of Moore's Law, for better or worse. Instead of just rendering every pixel faster, we'll render fewer pixels and make up the difference with AI. That happens through upscaling, through frame generation, and now even through multi-frame generation.

I know the idea of 'fake' frames and upscaled images rubs some folks the wrong way, and I get it. When graphics cards cost thousands of dollars, you'd hope for more than just software improvements. But with innovation in process technology slowing to a crawl, those are the routes performance improvements will come from. If you're holding out hope for another RTX 4090-scale improvement in raw performance, you're going to be disappointed. There may be some massive leap forward in the future, but it won't look the same as what we saw with the RTX 4090. As much as I'm rooting for more powerful graphics cards for years to come — regardless of whether they come from Nvidia or not — it's important to reset expectations in the meantime.
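To make that "new idea of a performance boost" concrete: with multi-frame generation, the presented frame rate scales mostly with generated frames rather than with rendering speed. A minimal sketch with illustrative numbers, assuming DLSS 4's 4x mode (one rendered frame plus up to three generated ones) and an assumed 10% render-rate overhead for running frame generation:

```python
# Effective frame rate under multi-frame generation (illustrative numbers).
native_fps = 60.0        # fully rendered frames per second (assumed)
overhead = 0.90          # assumed ~10% render-rate cost of frame generation
frames_per_render = 4    # 1 rendered + 3 AI-generated (DLSS 4 4x mode)

presented_fps = native_fps * overhead * frames_per_render
print(f"Presented: ~{presented_fps:.0f} fps from {native_fps:.0f} rendered fps")
```

The inputs here are placeholders; the point is that most of the uplift comes from generated frames, not from rendering each frame faster.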
