AI Can't Replace Education—Unless We Let It

Time Magazine

As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless?
Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company's computer code. NVIDIA's Jensen Huang has even declared coding itself obsolete.
While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.
Despite the hype, AI cannot 'think' for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world?
AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.
Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. So unless a user already knows a lot about a given subject, Sokal warned, they might not catch a 'bullsh*tting' chatbot. That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences but lacks conceptual grounding.
That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.
That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether they are laws of physics, biological systems, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.
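The core idea behind these physics-informed approaches can be sketched in a few lines. This is a minimal illustration, not code from any system named in the article: it uses Newton's law of cooling as a stand-in for a scientific law, and scores a candidate model both on whether it violates that law. All parameter values and function names here are invented for the example.

```python
import math

# Toy "physics-informed" check: Newton's law of cooling, dT/dt = -k*(T - T_env).
# A purely data-driven model is judged only on fitting observations; a
# physics-informed model is also penalized whenever it violates the law itself.
# All numbers are illustrative.

K, T_ENV, T0 = 0.3, 20.0, 90.0          # cooling rate, ambient temp, initial temp

def true_temp(t):
    """Exact solution of the cooling ODE."""
    return T_ENV + (T0 - T_ENV) * math.exp(-K * t)

def physics_residual(model, ts, dt=1e-4):
    """Mean squared violation of dT/dt + k*(T - T_env) = 0, via finite differences."""
    total = 0.0
    for t in ts:
        dTdt = (model(t + dt) - model(t - dt)) / (2 * dt)
        total += (dTdt + K * (model(t) - T_ENV)) ** 2
    return total / len(ts)

ts = [0.5 * i for i in range(1, 20)]

# A straight line through the endpoints fits two observations perfectly,
# but the physics residual exposes it as inconsistent with the law.
def linear_model(t):
    return T0 + (true_temp(10.0) - T0) * (t / 10.0)

print(physics_residual(true_temp, ts))     # near zero: obeys the law everywhere
print(physics_residual(linear_model, ts))  # large: violates the law
```

In a real PINN this residual term is added to the training loss, so the network is pulled toward solutions that respect the governing equations even where data is sparse.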
I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or too late reduces essential oil potency, hurting quality and profits. A purely data-driven AI may waste time combing through irrelevant patterns. A MINN, however, starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.
Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what is wrong, why, and precisely where, using the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature.
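The bioheat law such models embed is typically the Pennes equation, which balances heat conduction, blood perfusion, and metabolic heat. The sketch below is not the RIT model; it only shows, with invented parameter values, what "embedding the law" means: scoring a candidate temperature field by how badly it violates the steady-state, one-dimensional Pennes balance.

```python
# Sketch of the (steady-state, 1-D) Pennes bioheat equation as a constraint:
#   k*T''(x) + w*(T_a - T(x)) + q_m = 0,
# where w lumps blood perfusion and q_m is metabolic heat generation.
# Parameter values below are illustrative, not clinical.

K = 0.5       # tissue thermal conductivity, W/(m*K)
W = 2000.0    # perfusion term w = omega * rho_blood * c_blood, W/(m^3*K)
TA = 37.0     # arterial blood temperature, deg C
QM = 400.0    # metabolic heat source, W/m^3

def bioheat_residual(T, xs, dx=1e-4):
    """Mean squared violation of the steady Pennes equation for a field T(x)."""
    total = 0.0
    for x in xs:
        d2T = (T(x + dx) - 2 * T(x) + T(x - dx)) / dx**2
        total += (K * d2T + W * (TA - T(x)) + QM) ** 2
    return total / len(xs)

xs = [0.001 * i for i in range(1, 30)]

# Uniform tissue at the equilibrium temperature T_a + q_m/w satisfies the law,
def equilibrium(x):
    return TA + QM / W

# while tissue that is too hot for its perfusion level violates it.
def too_hot(x):
    return TA + 2.0

print(bioheat_residual(equilibrium, xs))  # near zero: physically consistent
print(bioheat_residual(too_hot, xs))      # large: physically inconsistent
```

A model trained with this residual in its loss cannot simply memorize thermal images; any prediction that breaks the heat balance is penalized, which is what lets such models localize the heat source rather than just flag a pattern.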
The takeaway is simple: humans are still essential. As AI becomes more sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That ability to spot nonsense isn't just a check on AI's weaknesses. It is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.
The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense. Fortunately, the future doesn't have to play out like this.
We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.
After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding.
This approach is necessary today. We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.'
AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter.
The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.




