
Score! RTX 5070 Ti OLED gaming laptop is $450 off for a limited time
Summer is upon us, and with it comes the first major discounts I've seen on gaming laptops packing the latest Nvidia GeForce RTX 50-series graphics cards.
The best deal I've seen so far is this Lenovo Legion Pro 7i with RTX 5070 Ti for $2,399 at B&H, which knocks $450 off the asking price for this high-end gaming laptop with one of the newest Nvidia GeForce RTX 50-series GPUs you can get.
This Lenovo Legion Pro 7i is a cutting-edge gaming laptop thanks to its Nvidia GeForce RTX 5070 Ti GPU, Intel Core Ultra 9 275HX CPU, 32GB of RAM and 2TB of storage. That's more than enough power to make all your favorite games run great on the 16-inch 1600p 240Hz OLED display.
The Nvidia GeForce RTX 5070 Ti hit the market just a few months ago, and it looks to be the ideal value offering in the RTX 50-series lineup right now. And while it's not the highest-end 50-series card, it offers more than enough muscle to run even the best PC games well on this machine.
Plus, the laptop itself is a well-designed 16-inch gaming notebook that's equally good for gaming or productivity work. If you read our Lenovo Legion Pro 7i review, you can see how thin and elegant it is in person, along with shots of the plentiful port array and test results, which prove why it ranks among the best gaming laptops on the market.
That 16-inch (2560 x 1600) 240Hz OLED display looks lovely to boot, and it will make all your favorite games and movies look fantastic—and since it supports HDR and Dolby Vision, you can enjoy your media to the fullest.
Of course, we haven't had a chance to test this RTX 5070 Ti version yet, but it's sure to outperform its predecessors and run games well thanks to the power of Nvidia's latest laptop GPUs. Factor in the 32GB of DDR5 RAM and 2TB of SSD storage, and you see why you don't have to stress about this laptop running out of RAM or room for your favorite games anytime soon.
With Wi-Fi 7 and a full, comfy keyboard, you can cart this beast to the coffee shop when you want to work, and when you're done, you can lug it back to the living room and play PC games on your big screen via the HDMI 2.1, Thunderbolt 4 or USB-C ports. You also get USB-A and RJ-45 Ethernet ports, so you can count on being able to plug in old accessories and jack into wired Internet when gaming online.
Admittedly, this is a hefty beast that weighs over six pounds, so you'll probably want to keep it on your desk or coffee table most of the time. But that's true of most gaming laptops, and for my money, this is the best deal on an RTX 50-series machine I've seen all month.
Related Articles


CNBC
15 minutes ago
Nvidia's first GPU was made in France — Macron wants the country to produce cutting edge chips again
French President Emmanuel Macron on Wednesday made a pitch for his country to manufacture the most advanced chips in the world, in a bid to position itself as a critical tech hub in Europe. The comments come as European tech companies and countries are reassessing their reliance on foreign technology firms for critical technology and infrastructure.

Chipmaking in particular arose as a topic after Nvidia CEO Jensen Huang, speaking on a panel alongside Macron and Mistral AI CEO Arthur Mensch, said on Wednesday that the company's first graphics processing unit (GPU) was manufactured in France by SGS Thomson Microelectronics, now known as STMicroelectronics. Yet STMicroelectronics is currently not at the leading edge of semiconductor manufacturing. Most of the chips it makes are for industries like automotive, which don't require the most cutting-edge semiconductors.

Macron nevertheless laid out his ambition for France to be able to manufacture semiconductors in the range of 2 nanometers to 10 nanometers. "If we want to consolidate our industry, we have now to get more and more of the chips at the right scale," Macron said on Wednesday. The smaller the nanometer number, the more transistors can fit into a chip, yielding a more powerful semiconductor. Apple's latest iPhone chips, for instance, are based on 3-nanometer technology. Very few companies are able to manufacture chips at this level and on a large scale, with Samsung and Nvidia supplier Taiwan Semiconductor Manufacturing Co. (TSMC) leading the pack.

If France wants to produce these cutting-edge chips, it will likely need TSMC or Samsung to build a factory locally, something that has been happening in the U.S., where TSMC has committed billions of dollars to build more factories. Macron touted a deal between Thales, Radiall and Taiwan's Foxconn, which are exploring setting up a semiconductor assembly and test facility in France.
"I want to convince them to make the manufacturing in France," Macron said during VivaTech — one of France's biggest tech events — on the same day Nvidia's Huang announced a slew of deals to build more artificial intelligence infrastructure in Europe. One key partnership announced by Huang is between Nvidia and French AI model firm Mistral to build a so-called "AI cloud." France has looked to build out its AI infrastructure and Macron in February said that the country's AI sector would receive 109 billion euros ($125.6 billion) in private investments in the coming years. Macron touted the Nvidia and Mistral deal as an extension of France's AI buildout. "We are deepening them [investments] and we are accelerating. And what Mistral AI and Nvidia announced this morning is a game-changer as well," Macron told CNBC on Wednesday.


Business Insider
an hour ago
'Don't Bet Against It': $4 Trillion in Sight for Nvidia Stock, Says Investor
Nvidia (NASDAQ:NVDA) has regained much of its shine over the last two months, catching fire once more with a bull run that has boosted its share price by some 50%.

This sharp rebound came after a rare rough patch for the AI chipmaker earlier in 2025, when the stock struggled under the weight of tariff shocks, export restrictions, and concerns over reduced capex spending by hyperscalers. The tide began to turn as geopolitical tensions eased and major cloud players reaffirmed their investment plans, restoring confidence in the AI infrastructure boom. A strong earnings report and bullish guidance in late May further reinforced that momentum.

However, even with the renewed optimism, not all signals are flashing green. Nvidia's revenue growth is beginning to decelerate, a likely outcome given its enormous scale, prompting some investors to question how much upside is still left. Could this be the point where enthusiasm gives way to caution?

One investor, known by the pseudonym Cash Flow Venue, thinks the best course of action is to take a deep breath and enjoy the ride. 'Let go of valuation concerns and wait for a $4 trillion+ valuation,' explains Cash Flow Venue, who urges investors to simply 'follow the money!'

Looking at the company's recent Fiscal Q1 2026 earnings report, Cash Flow Venue cites its impressive year-over-year growth of 69% as a sign that Nvidia has no problem generating cash. Meanwhile, the company's EBITDA for the trailing twelve months has grown to some $91 billion, with the pace of those gains outpacing the rise in Nvidia's enterprise value. 'Nvidia's bears often forget that its valuation growth wasn't detached from the business growth. Even more, the business grew more dynamically than the valuation,' Cash Flow Venue noted. And there are plenty of drivers to boost additional growth, the investor hastens to add.
With its top-tier hardware, CUDA software, and robust finances, Cash Flow Venue believes that CEO Jensen Huang's optimism regarding Nvidia's future is certainly justified. With the pole position in the AI race and plenty of momentum on its side, it's an easy decision for this investor: 'A "strong buy" business with the best capabilities to capitalize on the new "industrial revolution,"' concludes Cash Flow Venue.

That's the gist on Wall Street as well. With 35 Buy, 4 Hold, and 1 Sell recommendations, NVDA continues to be a consensus Strong Buy. Its 12-month average price target of $172.36 indicates that, despite its recent surge, analysts still see upside of roughly 21% ahead.


Forbes
2 hours ago
HBM And Emerging Memory Technologies Enable AI Training And Inference
During a congressional hearing of the House of Representatives' Energy & Commerce Committee Subcommittee on Communications and Technology, Ronnie Vasishta, senior VP of telecom at Nvidia, said that mobile networks will be called upon to support a new kind of traffic: AI traffic. This AI traffic includes the delivery of AI services to the edge, or inferencing at the edge. Such growth in AI data could reverse the general trend towards lower growth in traffic on mobile networks. Many AI-enabled applications will require mobile connectivity, including autonomous vehicles, smart glasses, generative AI services and many others. He said that the transmission of this massive increase in data needs to be resilient, fit for purpose, and secure.

Supporting this creation of data from AI will require large amounts of memory, particularly very high bandwidth memory such as HBM. This will result in great demand for memory that supports AI applications. Micron announced that it is now shipping HBM4 memory to key customers for early qualification efforts. The Micron HBM4 provides up to 2.0TB/s bandwidth and 24GB capacity per 12-high die stack. The company says that its HBM4 uses its 1-beta DRAM node, advanced through-silicon via technologies, and a highly capable built-in self-test.

HBM memory, consisting of stacks of DRAM dies with massively parallel interconnects to provide high bandwidth, is combined with GPUs such as those from Nvidia. Placing this memory close to the processor enables training and inference of various AI models. Current GPUs use HBM3e memory. At the March 2025 GTC in San Jose, Jensen Huang said that Micron HBM memory was being used in some of Nvidia's GPU platforms. The manufacturers of HBM memory are SK hynix, Samsung and Micron, with SK hynix and Samsung providing the majority of supply and Micron coming in third.
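To put the quoted HBM4 figures in perspective, here is a quick back-of-the-envelope sketch using the numbers above (2.0TB/s per stack, 24GB per 12-high stack). The 2048-bit stack interface width is an assumption based on the general HBM4 direction, not a figure from this article.

```python
# Rough per-die and per-pin figures for an HBM4 stack.
STACK_BANDWIDTH_TBPS = 2.0    # TB/s per stack (quoted above)
STACK_CAPACITY_GB = 24        # GB per stack (quoted above)
DIES_PER_STACK = 12           # 12-high stack (quoted above)
INTERFACE_WIDTH_BITS = 2048   # assumed HBM4 interface width

# Capacity contributed by each DRAM die in the stack.
capacity_per_die_gb = STACK_CAPACITY_GB / DIES_PER_STACK

# Convert TB/s to Gb/s (x8 bits, x1000), then spread across the data pins.
per_pin_gbps = STACK_BANDWIDTH_TBPS * 8 * 1000 / INTERFACE_WIDTH_BITS

print(capacity_per_die_gb)       # 2.0 GB per die
print(round(per_pin_gbps, 2))    # 7.81 Gb/s per data pin
```

Under these assumptions, each DRAM die in the stack contributes 2GB, and the quoted 2.0TB/s works out to roughly 8 Gb/s per data pin, which is why the massively parallel interface, rather than raw pin speed, is what makes HBM fast.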
SK hynix was the first to announce HBM memory in 2013, and it was adopted as an industry standard by JEDEC that same year. Samsung followed in 2016, and in 2020 Micron said that it would create its own HBM memory. All of these companies expect to be shipping HBM4 memories in volume sometime in 2026.

Numem, a company involved in magnetic random access memory (MRAM) applications, recently talked about how traditional memories used in AI applications, such as DRAM and SRAM, have limitations in power, bandwidth and storage density. The company said that processing performance has skyrocketed by 60,000X over the past 20 years while DRAM bandwidth has improved only 100X, creating a 'memory wall.' Numem says that its AI Memory Engine is a highly configurable memory subsystem IP that enables significant improvements in power efficiency, performance, intelligence, and endurance, not only for Numem's MRAM-based architecture but also for third-party MRAM, RRAM, PCRAM, and flash memory. Numem said that it has developed next-generation MRAM supporting die densities up to 1GB, which can deliver SRAM-class performance with up to 2.5X higher memory density in embedded applications and 100X lower standby power consumption. The company says that its solutions are foundry-ready and production-capable today.

Coughlin Associates and Objective Analysis, in their Deep Look at New Memories report, predict that AI and other memory-intensive applications, including the use of AI inference in embedded devices such as smart watches and hearing aids, will decrease the costs and increase the production of MRAM, RRAM and other emerging memory technologies, which are already in use in such devices. These memory technologies are already available from major semiconductor foundries. They scale to smaller lithographic dimensions than DRAM and SRAM, and because they are non-volatile, no refreshes are needed, so they consume less power.
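The 'memory wall' figures quoted above can be made concrete with a little arithmetic, showing how far compute has outrun memory bandwidth over the stated 20-year window:

```python
# Compute vs. DRAM-bandwidth growth, using the figures quoted above.
compute_gain = 60_000   # processing performance gain over 20 years
bandwidth_gain = 100    # DRAM bandwidth gain over the same period
years = 20

# How much the compute-to-bandwidth gap widened overall.
gap_growth = compute_gain / bandwidth_gain

# Equivalent compound annual growth rates over the 20-year window.
compute_cagr = compute_gain ** (1 / years) - 1
bandwidth_cagr = bandwidth_gain ** (1 / years) - 1

print(gap_growth)                        # 600.0x wider gap
print(round(compute_cagr * 100, 1))      # ~73.3% per year
print(round(bandwidth_cagr * 100, 1))    # ~25.9% per year
```

In other words, if those figures hold, compute capability has outgrown DRAM bandwidth by a factor of 600 over two decades, which is the gap that HBM stacking and the emerging non-volatile memories are trying to close.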
As a result, these memories allow more memory capacity and lower power consumption in space- and power-constrained environments. MRAM and RRAM are also being built into industrial, enterprise and data center applications. The report projects the replacement of traditional memories (SRAM, DRAM, NOR and NAND flash) by these emerging memories; NOR and SRAM in particular, as embedded memories, are projected to be replaced by the new memories within the next decade as part of a future $100B memory market.

AI will generate increased demand for memory to support training and inference. It will also increase the demand for data over mobile networks. This will drive demand for HBM memory and also increase demand for new emerging memory technologies.