Latest news with #GDDR6


Digital Trends
06-05-2025
- Digital Trends
We just got our first glimpse of AMD's RX 9060 XT, but many questions remain
The start of the year was littered with some of the best graphics cards (although, admittedly, they weren't really up for grabs due to the state of the market). Now it's time for the mainstream GPUs to make an appearance. AMD's RX 9060 XT is one such GPU that many gamers are waiting for, and we just spotted it in a retail listing, indicating the launch might not be too far off.

Spotted by VideoCardz, the GPU broke cover at a Brazilian retailer, Terabyteshop. The Gigabyte AMD Radeon RX 9060 Gaming OC was the only model listed, but it came with an official-looking AMD blurb and a spec sheet, although the most important detail was missing: the price. First, let's dig into what we do know, and then we can discuss what we still need to learn.

The RX 9060 XT comes with 16GB of GDDR6 memory, and although the spec sheet failed to mention it, most leakers expect a narrow 128-bit bus. The maximum resolution is listed as 7,680 x 4,320, with support for up to four displays. The card supports DirectX 12 and OpenGL 4.6, comes with two DisplayPort 2.1a and two HDMI 2.1a connectors, and uses only a single 8-pin power connector. Despite that, the spec sheet still calls for an 850-watt power supply, which sounds like overkill for a graphics card of this caliber, so we could be dealing with a mistake in the specifications.

Of course, until the card is officially released, we can't take these specs at face value, but this source seems pretty legitimate, so a lot of it might be true. With that said, we're still dealing with a bunch of question marks. For starters, what's the memory interface like? What's the maximum clock speed? Most of all, how much will this GPU cost? AMD's RX 9070 XT struck a goldmine by offering great value for the money. Unfortunately, once the initial stock of MSRP-priced GPUs sold out, the card drastically increased in price.
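If the rumored 128-bit bus is accurate, it's easy to estimate the card's peak memory bandwidth. A minimal sketch, assuming a 20 Gbps per-pin GDDR6 speed grade (the data rate is not in the spec sheet, so that figure is an assumption for illustration):

```python
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s.

    One data pin per bit of bus width; each pin transfers
    data_rate_gbps_per_pin gigabits per second, and 8 bits = 1 byte.
    """
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Rumored 128-bit bus; 20 Gbps per pin is a common GDDR6 speed grade (assumption).
print(memory_bandwidth_gb_s(128, 20.0))  # 320.0 GB/s
```

For comparison, the same formula on a 256-bit bus at the same data rate gives 640 GB/s, which is why a narrow bus is a common cost-cutting lever on mainstream cards.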
If AMD can offer a steady supply of the RX 9060 XT at around $300 or below, it could be a hit, but the 'steady supply' part is what worries me. AMD has yet to officially launch the card. Rumor has it that the RX 9060 XT will be officially announced on May 21 during Computex 2025.


Forbes
19-04-2025
- Business
- Forbes
AI Gets Memory With Chips From Micron And Others
News broke yesterday that Micron Technology is shaking things up with a new focus on a 'cloud memory business unit' that will create HBM, or high-bandwidth memory, chips. HBM chips are 3-D stacked DRAM devices traditionally used in high-performance hardware setups.

Over in the world of model design, we're seeing LLMs get more memory capacity and more utility out of the context data they keep in memory. So it makes sense that this hardware revolution would be occurring. The interesting thing is who the players are. Insiders note that Micron is a top global provider of HBM chips, along with Samsung and a company called SK Hynix.

So who's actually making these chips? Take Samsung, for example. Industry news reveals that Samsung is working with its rival, foundry partner TSMC, to develop the HBM chips. We've seen many times how TSMC holds a dominant position in the market as a foundry: other companies use TSMC for its raw fabrication power and develop their own plans on top of TSMC's production capability. That in turn has led to everything from a shortage of vehicle chips to, more recently, troublesome geopolitical problems around production having to do with export controls. It seems the world would be in much better shape if there were, say, a dozen foundry makers around the world.

Anyway, in creating these high-design chips, do Samsung and TSMC compete with Nvidia? Not exactly. Other industry reporting shows that Nvidia was planning to buy the chips from Samsung, but the vendor couldn't meet Nvidia's bar. A March 20 press release shows Nvidia CEO Jensen Huang saying Samsung 'has an important part to play,' while noting that the company hasn't formally ordered Samsung HBM3E chips.

First of all, the HBM chip is a 3-D stacked DRAM type of chip.
The memory unit sits close to a CPU or GPU to conquer latency and provide high bandwidth with low power consumption. I asked ChatGPT more about the specs for these chips, and it came out with this:

- Bandwidth: 819 GB per second, per stack
- Speed: 6.4 Gbps per pin
- Capacity: up to 64 GB per stack
- Thermals: better efficiency
- Use cases: AI, HPC, GPUs

(In this context, we're talking mainly about using it for AI applications.) ChatGPT also gave me an interesting graphic comparing the HBM's build to GDDR6, a gaming memory type that's cheaper and more widely available. You can get more from public resources like this one on how HBM has been engineered to fit very specific needs.

Let's look briefly at this corner of the tech market, for enterprise context that CEOs (or anyone else) might want to know about. First, we have Nvidia down around 40% from all-time highs within the past year, falling back toward $100 per share in recent trading, ostensibly based on U.S. export controls. The assertion from Huang and company that Nvidia is poised to lose $5.5 billion due to the new rules has been big news lately. Then there's Micron, at around $70 per share currently, about half its all-time high and down significantly since winter. Samsung, meanwhile, looks to be down 8% in a short time frame. Companies like AMD are also down.

'A warning from AI chips champion Nvidia that it will face a $5.5 billion hit from tightened U.S. controls on exports to China marks a new chapter in the escalating tit-for-tat between Washington and Beijing,' AJ Bell investment director Russ Mould said, as quoted by Elsa Ohlen writing for Barron's.

That's a little on some of the great new hardware developments happening now. The context, in terms of LLM news, is the advancement of models with persistent memory. I've talked about using an AI chat companion from Sesame, for example, and how 'Maya' seemed to remember my name, as a return user, on a good day.
Along with chain of thought, memory is a big capability builder for all of those vibrant use cases that we have come to expect from our neural net friends and neighbors.