RTX 50 series no longer supports one of Nvidia's oldest GPU technologies

Yahoo · 20-02-2025

Nvidia has ended support for the 32-bit PhysX graphics technology. The company has quietly taken the legacy SDK out of rotation, much to the chagrin of fans who still play the games that rely on it.
Nvidia confirmed the end-of-life status of the GPU-accelerated physics simulation SDK, which was known for cloth simulation, shattering glass, flowing liquids, and other particle effects. PhysX is fondly associated with AAA titles including the Batman: Arkham trilogy, Borderlands: The Pre-Sequel, Borderlands 2, Metro: Last Light, Metro: Exodus, Metro 2033, Mirror's Edge, The Witcher 3, and certain older Assassin's Creed games.
Despite its popularity in the 2000s and early 2010s, PhysX fell out of favor as Nvidia upgraded other aspects of its graphics components, with the RTX line becoming the company's new standard GPU technology. Meanwhile, PhysX floundered because it was not compatible with non-Nvidia GPUs, consoles, or smartphones, Tom's Hardware noted.
The Verge also noted Redditors discussing their disappointment at the end of support for PhysX. One member described forcing Borderlands 2 to run with PhysX enabled, with terrible results: 'Got drops to below 60 FPS by just standing and shooting a shock gun at a wall,' they said.
The commenter claims to have played on a high-end system featuring an RTX 5090 GPU and an AMD Ryzen 7 9800X3D CPU. By comparison, they said they tested the same game with an RTX 4090 GPU and maintained 120 FPS.
While it's not for everyone, playing legacy games is a common hobby. Tom's Hardware notes that enthusiasts can keep PhysX support going on a PC running an RTX 50 series GPU by pairing it with an RTX 40 series or older graphics card, then selecting the older card as the dedicated PhysX processor in the Nvidia Control Panel when necessary.


Related Articles

Want Buffett-Style Returns From Artificial Intelligence (AI)? These 2 Stocks Might Deliver.

Yahoo · an hour ago

Nvidia remains a must-own stock for AI investors. SoundHound AI could provide even bigger gains.

Warren Buffett's conglomerate Berkshire Hathaway is famous for its long-term performance. Since he took over the company almost six decades ago, shares have produced annualized gains of more than 20%. Few investment vehicles in history have been able to provide this level of returns for such a long stretch of time. While it won't be easy for investors to achieve similar returns, the artificial intelligence (AI) revolution should provide some compelling opportunities. According to forecasts by Grand View Research, the AI market could grow by 30% or more per year over the next decade and beyond. By investing in the right AI stocks today, you could generate Buffett-style returns over time, and the two stocks below look like your best bets right now.

While it has gotten a ton of press and attention already, Nvidia (NASDAQ: NVDA) remains a must-own stock for every investor looking to capitalize on the rise of AI. The story here is simple. AI spending has been skyrocketing in recent years, with most forecasts calling for double-digit percentage growth throughout the next decade and beyond. Nearly all of this growth will depend on GPUs and other specialized AI accelerator chips: high-powered parallel processors that provide the specific types of computational muscle to make training and running complex AI models possible. Because it already held the leading position in the GPU space when the AI revolution took off, Nvidia now controls roughly 90% of the market for GPUs destined to support AI applications. So at least right now, if AI innovation and spending continue to rise, Nvidia will directly benefit due to rising demand for its GPUs.

Nvidia's dominance in GPUs is no joke. When it unveiled its latest GPU architecture, dubbed Blackwell, last year, the next-gen chips quickly sold out for the next 12 months. According to one report from October, "Customers who have not yet placed an order with the company won't receive any Blackwell products until late 2025." Just to make it clear, Nvidia is by no means competing on price. In fact, it has the highest gross margin by far in the industry, reflecting both its superior offerings and its strong pricing power.

There are some long-term concerns. Nvidia may not be able to maintain its high market share, technological superiority, or high gross margins forever. But even if it does cede some market share and profitability, there's a good chance the company can continue to grow at double-digit percentage rates for far longer than its current valuation of 44 times earnings suggests. Nvidia is now worth more than $3.4 trillion; in the past week, it has been neck and neck with Microsoft for the title of world's largest company. As such, its long-term growth from here may be limited to a degree by its sheer size.

If you're looking to maximize your upside potential in exchange for a little more risk, take a look at SoundHound AI (NASDAQ: SOUN). Most investors haven't heard of this company. Even many AI investors aren't aware of it. That's because SoundHound's market cap is just $4 billion. But if you're looking for big gains, relatively small players like this can deliver, even if the investments carry more risk. As its name suggests, SoundHound AI's business involves AI technologies within the audio arena. Think drive-thru windows where your order is taken by an AI agent, customer support lines operated by AI agents, or even in-vehicle AI assistants that can chat with you about the weather or your car's upcoming maintenance needs. SoundHound already has customers piloting its technology in all of these areas. And similar to the rest of the AI industry, the voice AI segment of the market is expected to grow by 30% to 40% per year over the long term.

SoundHound faces heavy long-term competition from big tech competitors that have much deeper pockets from which to fund their research and development budgets. But the potential is there. You'll have to get comfortable with the stock's premium valuation, though: shares trade at nearly 40 times sales. And you'll have to monitor customer growth and competitive pressures. But if you're looking for a company that can deliver 20% annual returns over the next decade and beyond, SoundHound certainly has all the right pieces in place. Just know that shares are for risk-tolerant, long-term investors only.

Before you buy stock in Nvidia, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $674,395!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $858,011!* Now, it's worth noting Stock Advisor's total average return is 997%, a market-crushing outperformance compared to 172% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.

*Stock Advisor returns as of June 2, 2025

Ryan Vanzo has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Berkshire Hathaway and Nvidia. The Motley Fool has a disclosure policy.

Want Buffett-Style Returns From Artificial Intelligence (AI)? These 2 Stocks Might Deliver. was originally published by The Motley Fool


AI Can't Replace Education—Unless We Let It

Time Magazine · 3 hours ago

As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless? Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company's computer code. NVIDIA's Jensen Huang has even declared coding itself obsolete. While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.

Despite the hype, AI cannot 'think' for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world? AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.

Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. So, unless a user knows a lot about a given subject, according to Sokal, one might not catch a 'bullsh*tting' chatbot. That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences but lacks the conceptual grounding.

That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.

That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether they are laws of physics, biological systems, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.

I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or late reduces essential oil potency, hurting quality and profits. An AI may waste time combing through irrelevant patterns. However, a MINN starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.
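For readers who want to see the mechanics, here is a minimal sketch of the physics-informed idea in Python, assuming PyTorch. The decay law du/dt = -k*u, the constant k, the tiny network, and the toy data are illustrative assumptions, not the models described in this piece. The network is trained to fit a handful of observations while also being penalized whenever its derivative violates the assumed law.

# Minimal physics-informed training sketch (illustrative; not the article's model).
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 1.5  # assumed known rate constant: the "physics" we embed

# Small network approximating u(t).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# A few noisy observations of the true solution u(t) = exp(-k*t).
t_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-k * t_data) + 0.01 * torch.randn_like(t_data)

# Collocation points where the differential equation is enforced.
t_col = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    opt.zero_grad()
    # Data term: match the observations.
    loss_data = ((net(t_data) - u_data) ** 2).mean()
    # Physics term: penalize the ODE residual du/dt + k*u.
    u = net(t_col)
    du_dt = torch.autograd.grad(u, t_col, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_phys = ((du_dt + k * u) ** 2).mean()
    (loss_data + loss_phys).backward()
    opt.step()

A MINN-style model would swap this toy decay law for a mechanistic one, such as plant phenology or bioheat transfer equations, but the shape of the training objective is the same: data error plus a penalty for breaking the known rules.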
Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what's wrong, what's causing it, why, and precisely where it is by utilizing the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature.

The takeaway is simple: humans are still essential. As AI becomes sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That isn't just a weakness of AI. It is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.

The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense.

Fortunately, the future doesn't have to play out like this. We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding. This approach is necessary today. We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.'

AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter. The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.
