
Latest news with #TeamGreen

Nvidia N1X CPU appears in new benchmark — but it doesn't show its true potential

Tom's Guide • 20 hours ago

Nvidia's rumored N1X chip recently showed up on Geekbench, giving us a sneak peek at specs that appear to match an RTX 5070 desktop GPU. Now, the Arm-based CPU has popped up in yet another benchmark.

The N1X has been spotted in the FurMark benchmarking tool (via VideoCardz), with results showing its GPU capabilities. Codenamed "NVIDIA JMJWOA," it received an OpenGL score of 4,286, but it wasn't running at full GPU usage. According to the listing, the N1X used 63% of its maximum GPU capacity, with its result putting it well under an RTX 5060 desktop graphics card. However, seeing as the previous Geekbench listing showed it with a 20-core CPU and 6,144 CUDA cores, the same number as an RTX 5070 desktop GPU, it's sure to offer far more performance.

Interestingly, the listing shows that Nvidia, or a manufacturer testing the chip for upcoming PCs, is running the Arm-based CPU on Windows 11. That means the N1X SoC will run on Windows, as many have expected.

While this benchmark doesn't show just how well the N1X will perform (much like the Geekbench result), it does give us another tease that Nvidia's anticipated N1X may be closer than we think, seeing as benchmarks popping up is usually a sign that a release is approaching.

There's been a lot of back and forth on Team Green's rumored Windows-on-Arm CPU. It was initially expected to be announced at Computex 2025, then tipped to be delayed until late 2026 due to issues with the silicon, but now it looks like it may arrive somewhat sooner. According to reports, Nvidia's N1X CPU has been delayed until early 2026, apparently due to delays in Microsoft's next-gen Windows OS (possibly Windows 12). Whatever the case, that means the chip could be announced at CES 2026, which is when Nvidia often delivers big news, like its RTX 50-series GPU lineup.

Of course, since the chip has yet to be officially announced, we won't know for sure until it happens. In the meantime, check out our thoughts on why the RTX 5090 isn't best used for living room PC gamers.
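For the curious, here's a back-of-the-envelope way to read that 63% figure. This is a naive sketch that assumes the OpenGL score scales linearly with GPU utilization, which real FurMark workloads rarely do, so treat the result as a rough illustration rather than a prediction.

```python
# Naive extrapolation of the leaked N1X FurMark result.
# Assumption: score scales linearly with reported GPU utilization.

reported_score = 4286    # OpenGL score from the leaked listing
gpu_utilization = 0.63   # fraction of maximum GPU usage, per the listing

estimated_full_score = reported_score / gpu_utilization
print(f"Naive full-utilization estimate: {estimated_full_score:.0f}")
# -> roughly 6803, hinting at why the leak likely undersells the chip
```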

Nvidia RTX 50 SUPER GPUs may arrive sooner than you think — even before CES 2026

Tom's Guide • 2 days ago

Nvidia RTX 50 SUPER GPUs are rumored to be in the works, which will hopefully (and finally) fix the lack of video memory on its latest graphics cards. Now, it appears we may see them arrive soon, even before the start of 2026.

According to TweakTown, sources claim the Nvidia GeForce RTX 50 SUPER series is set to launch in Q4 2025, sometime during the holiday season. This means we could see an RTX 5080 SUPER, 5070 Ti SUPER and 5070 SUPER arrive before their expected announcement at CES 2026, putting the release window between October and December despite CES 2026 kicking off at the start of January.

Team Green's SUPER models generally launch around 12 months after the release of the base models, so this launch window does line up. However, at under 12 months, it would be the earliest launch of an RTX SUPER GPU yet, seeing as the RTX 5080 launched on January 30, while the RTX 5070 Ti was released in February and the RTX 5070 came to shelves in early March. If accurate, we'll see stronger RTX 50 SUPER cards before the end of the year, with the RTX 5080 SUPER expected to be the first to arrive. This also falls in line with a previous rumor from Moore's Law is Dead suggesting the RTX 5080 SUPER and 5070 SUPER would arrive in 2025.

Thanks to reliable leaker Kopite7kimi, we have a hint of the rumored specs the RTX 5080, 5070 Ti and 5070 SUPER will deliver. While there's an expected increase in CUDA cores (only for the RTX 5070 SUPER) and TGP, the real draw is the boost in video memory (VRAM). One general complaint about Nvidia's latest graphics cards is that they lack enough VRAM, especially the 12GB of GDDR7 in the RTX 5070. The SUPER series, however, is expected to deliver a big jump in video memory.

Here's a look at the rumored specs of the RTX 5080, 5070 Ti and 5070 SUPER, and how much of a boost these GPUs offer over their base RTX 50-series counterparts.

GPU               | CUDA Cores | Video Memory | TGP
RTX 5080 SUPER    | 10752      | 24GB GDDR7   | 415 watts
RTX 5080          | 10752      | 16GB GDDR7   | 360 watts
RTX 5070 Ti SUPER | 8960       | 24GB GDDR7   | 350 watts
RTX 5070 Ti       | 8960       | 16GB GDDR7   | 300 watts
RTX 5070 SUPER    | 6400       | 18GB GDDR7   | 275 watts
RTX 5070          | 6144       | 12GB GDDR7   | 250 watts

This would allow more room for higher resolutions (like 4K) in demanding AAA titles, path tracing and more, seeing as Cyberpunk 2077 with path tracing in Overdrive mode can push 16GB. Pricing is still up in the air, although the previous RTX 40 SUPER series launched at cheaper prices than their base counterparts, so hopefully Nvidia follows that tradition. That said, with RTX 50-series prices having gone beyond MSRP and the rumored release date coming soon, that may not be the case.

We'll know more closer to the RTX 50 SUPER series launch date, but for now, check out our thoughts on the RTX 5060 Ti.
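To put those rumored jumps into percentage terms, here's a quick sketch that works the uplifts out from the leaked figures in the table above. All numbers are the rumored specs, not confirmed ones, and the dictionary layout is just my own illustration.

```python
# Percentage uplifts of the rumored SUPER cards over their base models,
# computed from the leaked spec table (rumored figures, not confirmed).

specs = {
    # model: (CUDA cores, VRAM in GB, TGP in watts)
    "RTX 5080 SUPER":    (10752, 24, 415),
    "RTX 5080":          (10752, 16, 360),
    "RTX 5070 Ti SUPER": (8960, 24, 350),
    "RTX 5070 Ti":       (8960, 16, 300),
    "RTX 5070 SUPER":    (6400, 18, 275),
    "RTX 5070":          (6144, 12, 250),
}

pairs = [
    ("RTX 5080 SUPER", "RTX 5080"),
    ("RTX 5070 Ti SUPER", "RTX 5070 Ti"),
    ("RTX 5070 SUPER", "RTX 5070"),
]

for super_model, base_model in pairs:
    s, b = specs[super_model], specs[base_model]
    vram_gain = (s[1] - b[1]) / b[1] * 100
    tgp_gain = (s[2] - b[2]) / b[2] * 100
    print(f"{super_model}: +{vram_gain:.0f}% VRAM, +{tgp_gain:.0f}% TGP")

# Output: a 50% VRAM jump across the board, for a TGP increase of
# only 10-17%, which is why the memory boost is the headline here.
```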

Nvidia N1X CPU: Everything we know so far

Tom's Guide • July 20, 2025

Nvidia is the undisputed leader of the GPU market (whether you like it or not), with its RTX 50-series graphics cards making waves this year, but Team Green looks to be throwing its hat into the ring of another sector, as a new CPU may be on the horizon.

Rumors have been swirling of Nvidia N1X and N1 Arm-based CPUs, made for desktops and laptops respectively. While Nvidia has already announced a new Arm-based CPU, the N1-series chips are set to be for consumers. Believed to be made in partnership with MediaTek, this not only means Nvidia will have a stake in PCs in a whole new way, but as reports have pointed out, it could lead to slimmer, more powerful gaming laptops, too. While Nvidia may have the GPU and AI markets in its pocket, its N1X and N1 systems-on-chips (SoCs) may shake up the competition among Intel, AMD, Qualcomm and Apple.

It may be a while before we see Nvidia's N1X and N1 CPUs arrive, and there's still a lot to learn, but the rumor mill has been churning out plenty on these chips. Let's dive into what we know so far.

The rumored launch of Nvidia's N1-series CPUs has been all over the place; not too long ago, many believed the chips would be here by now. However, it's looking like we may have to wait at least a year until they arrive. Initially, Nvidia and MediaTek's Arm-based CPU was rumored to be announced at Computex 2025, with the tech giant expected to show off its smaller GB10 Blackwell chip in an Arm SoC coming to laptops. As you can tell, this didn't come to be, as it seems Nvidia wasn't ready to officially announce its chips. Many, including Moore's Law is Dead, believed it would arrive in late 2025 or early 2026, in time for CES 2026, but it may turn out to be later than we thought.

Now, it's been reported that the Nvidia N1X Arm CPU has been delayed until late 2026. As noted by SemiAccurate, Nvidia faced problems that blocked the CPU from arriving in early 2026. While this was reportedly handled, the new chip is now rumored to be facing another hurdle. The report doesn't detail the specific problem with Nvidia's chip, but sources state it has been hit with issues that require engineers to make design changes to the silicon. Due to this, the SoC is now believed to be coming later in 2026; with Nvidia's track record of announcements, that could end up being at CES 2027 in January. For now, of course, this is all up in the air. But with rumors indicating delays, it's likely to be a while before we see any mention of a new CPU from Nvidia.

So, what kind of performance can we expect Nvidia's N1-series chips to deliver? According to leaked benchmarks, we could see some big performance gains in ultraportable laptops. We've heard that the N1-series chip will be based on the GB10 Superchip found in Nvidia's announced Project DIGITS AI supercomputer (now known as DGX Spark) for desktops. The laptop version, set to be the N1 SoC, may be a cut-down version of GB10, with some combination of a Blackwell GPU and a MediaTek CPU. That said, there's reason to believe it could use a GB206 model. Either way, it's looking to leverage the power of an RTX GPU, with these Blackwell-based GPUs being used in RTX 5060 Ti and RTX 5060 graphics cards.
But the real kicker here is that the N1 chip will reportedly deliver the same performance as an RTX 4070-equipped laptop, but with far better energy efficiency, according to Taiwanese outlet UDN. A CPU with an integrated GPU of that kind of power, along with improvements to power efficiency (and so possibly longer battery life), is already a good sign that Team Green's chip will be worth waiting for.

The rumors continue: the N1 chip is expected to use 65W to match the performance of a 120W RTX 4070 gaming laptop, while another source suggests the chip will offer a TDP (Thermal Design Power) of 80W to 120W. According to ComputerBase, Nvidia and MediaTek's chip may only have 8 or 12 CPU cores instead of 20. Benchmark leaks of Nvidia's GB10 Arm Superchip (via Notebookcheck) suggest single-core performance reaching 2,960 and multi-core at 10,682. Due to the delay, it's only guesswork whether these are the benchmarks (or even specs) that will arrive; for now, these Geekbench results put it behind Apple's M4 Max chips.

While it's believed the N1X chip is for desktops and the N1 is for laptops, it's looking likely that the latter will be primed for gaming laptops. Reports even suggest the first gamer-focused notebooks to get them: according to the UDN report, Dell's gaming brand Alienware will be among the first to launch new gaming laptops featuring the Nvidia and MediaTek CPU. That means we could see fresh Alienware notebooks that are slimmer and offer better battery life, if rumors about the chip are accurate, not unlike the newly designed Alienware 16 Aurora lineup.

If rumors are accurate, Nvidia's Arm-based SoC is set to bolster ultraportable gaming laptops (and possibly PC gaming handhelds) with better power efficiency, which hopefully translates to greater battery life in gaming notebooks. We've seen Arm chips in action before, with Snapdragon X Elite laptops impressing with their long battery life and fast speeds. We've even tested Snapdragon X Elite PCs for gaming, and while impressive, they aren't quite built for demanding titles. With Nvidia's own chip sporting its GPU tech, however, gaming on machines with this chip could see major performance gains, especially if it uses some form of DLSS 4 and its Multi Frame Generation tech. But competition is already heating up from two heavy hitters in the laptop market: AMD's Strix Halo APU already delivers close to RTX 4060 desktop GPU power, and Qualcomm's Snapdragon X2 series chips are set to arrive soon.

It's still early days for the Nvidia N1X Arm-based CPU, as it isn't even certain to release. We have an idea of what to expect, especially when it comes to the power the N1-series laptop chip may deliver, but all of this could change if it doesn't arrive until next year. Only time will tell when Nvidia's N1X Arm-based CPU arrives, and whether it's the consumer CPU we've been expecting. But if it comes from Team Green, we should expect a boost for ultraportable laptops at the very least, along with a touch of AI for smarter power-efficiency management.
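As a rough illustration of where the efficiency optimism comes from, here's a minimal sketch of the performance-per-watt math implied by the UDN numbers. It assumes, as the report does, equal performance at the two wattages; the variable names are my own.

```python
# Implied perf-per-watt advantage from the UDN rumor:
# the N1 reportedly matches a 120W RTX 4070 laptop GPU while drawing 65W.
# Assumes equal performance at both wattages, per the report.

n1_watts = 65
rtx4070_laptop_watts = 120

efficiency_gain = rtx4070_laptop_watts / n1_watts
print(f"Implied perf-per-watt advantage: ~{efficiency_gain:.1f}x")
# -> ~1.8x, which is where the battery-life optimism comes from
```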

Nvidia wants to make 8GB GPUs great again with AI texture compression — but I'm not convinced

Tom's Guide • July 17, 2025

If you're annoyed at getting just 8GB of video memory (VRAM) on your Nvidia RTX 5060 Ti, RTX 5060 or RTX 5050 GPU, there may be a fix coming. And like a lot of Team Green's work, it's all about AI.

In 2025, when plenty of games require more than this from the jump, 8GB is simply not enough (and PC gamers are letting Nvidia and AMD know with their wallets). Which is why Nvidia is looking to neural trickery, its bread and butter with the likes of DLSS 4 and multi frame generation. You may already know of Neural Texture Compression (or NTC), which is exactly what it says on the tin: taking detailed in-game textures and compressing them for more efficient loading and higher frame rates.

As WCCFTech reports, NTC has seemingly taken another giant step forward by taking advantage of Microsoft's new Cooperative Vectors in DirectX Raytracing 1.2, with one test showing an up-to-90% reduction in VRAM consumption for textures. To someone who always wants people to get the best PC gaming bang for their buck, this sounds amazing. But I'm a little wary, for three key reasons.

As you can see in tests run by Osvaldo Pinali Doederlein on X (using a prerelease driver), this update to make the texture-loading pipeline more efficient with AI is significant. Texture size dropped from 79 MB all the way to just 9 MB, cutting VRAM consumption by nearly 90%. His results (v-sync disabled, RTX 5080, demo at the startup position, posted July 15, 2025) break down as follows:

  • Default: 2,350 fps / 9.20 MB
  • No FP8: 2,160 fps / 9.20 MB
  • No Int8: 2,350 fps / 9.20 MB
  • DP4A: 1,030 fps / 9.14 MB
  • Transcoded: 2,600 fps / 79.38 MB

Just like DLSS 4 and other technologies extracting a higher frame rate and better graphical fidelity out of RTX 50-series GPUs, NTC requires developers to code it in. And while Nvidia is one of the better companies in terms of game support for its AI magic (so far, over 125 games support DLSS 4), that's still a relatively small number against the thousands of PC titles that launch every year.

Of course, this is not a burn on Doederlein. This testing is great! But it is one example that doesn't take into account the broader landscape of challenges in a full game: a test scene of a mask with several different textures isn't the same as rendering an entire level. So while this near-90% number is impressive nonetheless, I anticipate it will be much lower on average when put to a far bigger challenge. But when it comes to 8GB GPUs, every little bit helps!

So yes, on paper, Nvidia's NTC could be the savior of 8GB GPUs, and it could extract more value from your budget graphics card. But let's address the elephant in the room: graphics cards with this little video memory have been around for years, games in 2025 have proven it's not enough, and neural texture compression looks to me like a sticking plaster. I don't want to ignore the benefits here, though, because any chance to make budget tech even better through software and AI is always going to be a big win for me. But with the ever-increasing demands of developers (especially with Unreal Engine 5 bringing ever more demanding visual masterpieces like The Witcher 4 to the fore), how far can AI compression really go?
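For the record, here's the simple arithmetic behind that headline reduction figure, using the 79.38 MB (transcoded) and 9.20 MB (NTC) numbers from Doederlein's results above.

```python
# The VRAM-reduction math behind the "up-to-90%" claim, using the
# texture memory figures from Doederlein's published test results.

transcoded_mb = 79.38   # conventional (transcoded) texture memory
ntc_mb = 9.20           # Neural Texture Compression texture memory

reduction = (transcoded_mb - ntc_mb) / transcoded_mb * 100
print(f"VRAM reduction for textures: {reduction:.1f}%")
# -> ~88.4%, i.e. the "nearly 90%" figure quoted in the report
```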

The RTX 5090 is the best graphics card I've ever owned — but there's a catch for living room PC gamers

Tom's Guide • July 14, 2025

Just as Terminator 2: Judgment Day predicted back in ye olden days of 1991, the future belongs to AI. That could be a problem for Nvidia RTX 50-series GPUs, even when it comes to the best consumer graphics card money can buy.

I was 'fortunate' enough to pick up an Nvidia GeForce RTX 5090 a couple of months ago. I use those semi-joking quote marks because I merely had to pay $650 over MSRP for the new overlord of GPUs. Lucky me. Before you factor in the 5090's frame-generating AI voodoo (which I'll get to), it's important to give credit to Team Green for assembling an utterly beastly piece of silicon. Around 30% more powerful than the RTX 4090, the previous graphics card champ, there's no denying it's an astonishing piece of kit. Whether you're gaming on one of the best TVs at 120 fps or one of the best gaming monitors at 240 fps and above, the RTX 5090 has been designed for the most ludicrously committed hardcore gamers. And wouldn't you know it? I just happen to fall into this aforementioned, horribly clichéd category.

So I have a PC similar to the build our lab tester Matt Murray constructed (he even posted a handy how-to on building a PC): the 5090, an AMD Ryzen 7 9800X3D and 64GB of DDR5 RAM on a Gigabyte X870 Aorus motherboard. As for the screens I play on, I have two. On the desk, I've got a Samsung Odyssey G9 OLED with a max 240Hz refresh rate, but most of the time I'm in living room mode on my LG G3 OLED with its max 120Hz refresh rate.

The main selling point of Nvidia's latest flagship is DLSS 4's Multi Frame Generation tech. Taking advantage of sophisticated AI features, Nvidia's RTX 50 cards can serve up blistering frame rates that simply can't be achieved through brute-force hardware horsepower. Multi Frame Generation — and I promise that's the last time I capitalize Team Green's latest buzz phrase — feels like the biggest (and most contentious) development to hit the PC gaming scene in ages. The tech has only been out for a few months, and there are already over 100 titles that support Nvidia's ambitious AI wizardry.

How does it work? Depending on the setting you choose, an additional one to three AI-generated frames are rendered for every native frame your GPU draws. This can lead to colossal onscreen FPS counts, even in the most demanding games. Doom: The Dark Ages, Cyberpunk 2077, Indiana Jones and the Great Circle and Half-Life 2 RTX — some of the most graphically intense titles around can now be played at incredibly high frame rates with full ray tracing engaged. That's mainly thanks to multi frame generation.

So I got into the games (turn off V-sync for the best results). For more specific context, these figures were taken from Doom's Forsaken Plain level, Indy's Marshall College section during a particularly challenging path-traced scene, driving around downtown Night City in Cyberpunk, and Gordon's mesmerizing new take on Ravenholm.
All games tested at 4K (max settings, DLSS Balanced); figures show average frame rate / latency:

Setting       | Cyberpunk 2077     | Doom: The Dark Ages | Indiana Jones and the Great Circle | Half-Life 2 RTX demo
Frame gen off | 58 FPS / 36-47 ms  | 95 FPS / 37-48 ms   | 85 FPS / 33-40 ms                  | 75 FPS / 26-3 ms
Frame gen x2  | 130 FPS / 29-42 ms | 160 FPS / 51-58 ms  | 140 FPS / 35-46 ms                 | 130 FPS / 29-42 ms
Frame gen x3  | 195 FPS / 37-52 ms | 225 FPS / 54-78 ms  | 197 FPS / 43-53 ms                 | 195 FPS / 37-52 ms
Frame gen x4  | 240 FPS / 41-60 ms | 270 FPS / 56-92 ms  | 243 FPS / 44-57 ms                 | 240 FPS / 41-60 ms

These are ludicrous frame rates, limited only by my LG G3 OLED's max 120Hz refresh rate or, in a couple of circumstances, even the sky-high 240Hz of my Samsung Odyssey G9 OLED. There is a catch, though, which goes back to the way I play. Despite my frame rate counter showing seriously impressive numbers, the in-game experience often doesn't feel as smooth as I expected.

As much as I've tried to resist, I've become increasingly obsessed with the excellent Nvidia app and (more specifically) its statistics overlay while messing around with multi frame gen of late. These stats let you monitor FPS, GPU and CPU usage and, most crucially for me, latency. Also known as input lag, latency measures the time in milliseconds it takes a game to register the press of a button on one of the best PC game controllers or the click of a key or mouse. If your latency is high, movement is going to feel sluggish, regardless of how lofty your frame rate is.

And that situation is compounded on my TV. The high frame rate is glorious on my monitor, but when locked to 120Hz you don't get the perceived smoother motion of those additional frames, creating a disconnect that makes the latency a bit more noticeable. If you own one of the best gaming PCs and want to enjoy a rich ray-traced experience with acceptable input lag at responsive frame rates on your TV, my advice is to aim for the frame gen level that lands as close to your maximum refresh rate as possible. For all the games I tested, that would be 2x. At this level, I find latency hovers around the mid 30s and never exceeds 60 ms, which feels snappy enough for that kind of living room gaming setup. Crank multi frame gen up to the x3 or x4 setting and the returns diminish, as the latency becomes more noticeable at the restricted refresh rate when using one of the best gaming mice. Flip to a 240Hz monitor, however, and the difference is night and day: the latency remains at a responsive level alongside those AI-injected frames for a buttery smooth experience.

And now we've got to talk about path tracing. Already blowing minds in Doom: The Dark Ages, and prevalent in the likes of Cyberpunk and Doctor Jones' enjoyable romp, it's essentially the 'pro level' form of ray tracing: a lighting algorithm that can produce in-game scenes that look staggeringly authentic. Given the demands this tech places on your GPU, the most graphically exciting development in PC gaming for years will most likely require DLSS 4's x3 or x4 AI frame-generating settings to maintain high frame rates in future implementations. I wasn't surprised that path tracing floored me in CD Projekt Red's seedy yet sensational open world — I was messing with its path-traced photo mode long before DLSS 4 arrived. The quality of the effect cranked to the max in The Great Circle knocked my socks off, though.
That stunning screenshot a few paragraphs above is from the game's second level, set in Indy's Marshall College. During a segment where Jones and his vexed buddy Marcus search for clues following a robbery, path tracing really gets to flex its muscles in a sun-dappled room full of antiquities. Dropping down to the highest form of more traditional ray tracing, I was genuinely shocked at just how much more convincing the path-traced equivalent looked. So while the technology matures, I hope Nvidia continues to work on reducing latency at these middle-of-the-road frame rates too, so that this AI trickery really hits the spot when maxed out.

To be clear for those on the fence about buying an RTX 5090: just as we've said in our reviews of the RTX 5060 Ti, 5070 and 5070 Ti, if you own a 40-series-equivalent GPU, you should stick with your current card. You may not get that multi frame gen goodness, but with DLSS 4 running through its veins, you still get the benefits of Nvidia's latest form of supersampling and its new Transformer model, delivering considerably better anti-aliasing while being less power-hungry than the existing legacy edition.

I don't want to end on a total downer, though, so I'll give credit where it's due. If you're on a monitor with a blisteringly fast refresh rate, I'll admit multi frame generation might be a good fit for your setup. My fondness for the RTX 5090 is only matched by Hannibal Lecter's delight in chowing down on human livers. But for those who hot switch between the desk and the couch like I do, make sure you tweak those settings to match your refresh rate; there's a rough sketch of that logic below.
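Here's that sketch: a minimal Python illustration of the tuning advice above, picking the frame gen multiplier whose projected output sits closest to the display's refresh rate without overshooting it. The function and the base FPS figures are my own hypothetical illustration (real frame gen also changes the base render rate slightly), not anything from the Nvidia app.

```python
# Sketch of the refresh-rate tuning advice: choose the highest frame-gen
# multiplier whose projected frame rate stays at or under the display's
# refresh rate. Frames beyond the refresh rate are never shown and only
# make the added input latency more noticeable on a locked 120Hz TV.

def pick_frame_gen(base_fps: float, refresh_hz: int) -> int:
    """Return the frame-gen multiplier (1-4) that best fits the display.

    Simplification: projected FPS = base_fps * multiplier, ignoring the
    small render-rate overhead that frame generation itself adds.
    """
    best = 1
    for mult in (1, 2, 3, 4):
        if base_fps * mult <= refresh_hz:
            best = mult
    return best

# Using the Cyberpunk 2077 base figure from the table (58 FPS, frame gen off):
print(pick_frame_gen(base_fps=58, refresh_hz=120))  # -> 2 (120Hz living room TV)
print(pick_frame_gen(base_fps=58, refresh_hz=240))  # -> 4 (240Hz desk monitor)
```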
