Khan Academy CEO predicts AI in the classroom will be like 5 'amazing graduate students' assisting teachers
Khan said in a BBC interview that AI agents will act as teachers' assistants, not as their substitutes.
Social skills will soon become more important than ever, he added, and students will still go to school to learn them.
AI agents grading papers, monitoring student performance, learning students' interests, and making suggestions to the teacher.
That's what the classroom of the future could look like, according to Khan Academy CEO Sal Khan: a learning environment where AI acts as a powerful aide for teachers, not their replacement.
"Imagine if your child's school district just discovered a billion dollars and they decided to hire some amazing graduate students to hang out in the classroom," Khan said in an interview with the BBC. "And so, every classroom is going to get four or five of these graduate students. These graduate students are going to be on call for the teacher of the classroom — to help grade papers, to help bounce ideas, think of really creative lesson plans."
The US is suffering from a teacher shortage, with instructors leaving the classroom in droves over the past few years. A combination of conditions, including heavy workloads and minimal pay, is driving away many veteran teachers while also discouraging new teachers from entering the profession.
For those choosing to work in the classroom, Khan said that AI assistants could help alleviate the strain while also improving student engagement.
The AI teaching assistants, he said, will be able to monitor the classroom along with the actual teacher. They'll be able to help children when needed and "report back to the teacher." For example, he added, AI could determine a student's personal interests and suggest relevant adjustments to the lesson plan to increase the odds that the student would engage with the material.
"'Hey, I noticed Catty is not as engaged as she was yesterday,'" Khan said. "Or, 'Look, Sal is really engaged today, why don't you go praise him on this thing that he did last night you might have not noticed?' or 'Did you know that he's really into baseball? Let's make the next example about that, just for Sal.'"
"And then they're able to distill all of that and communicate to the parents, so it's not even once a term, it's almost real-time," he added.
The AI agents could then debrief with the teacher at the end of class to make a game plan for the next day, including identifying students who may need some additional one-on-one time.
"I think that would be everyone's dream: the students would love it, the teachers would love it, and the parents would love it — and that's essentially what's going to happen with AI," Khan said.
AI has become an increasing presence in the lives of both students and teachers, and its role in the classroom has been widely debated.
Proponents of its use in the classroom, including some educators and business leaders such as Bill Gates, say it could help alleviate understaffing in schools and "enhance" education. Critics of AI in classrooms point to concerns about academic dishonesty and skill atrophy. A scenario like the one Khan describes, in which agents learn deeply about students' personalities and skills, also raises privacy and security concerns.
Though parents may have "legitimate fears" that children could outsource their learning to AI, Khan said that even if the tech rapidly develops into a highly capable presence in the classroom, he doesn't believe it could fully take over for human educators. As AI continues to advance, social skills will become more important than ever, he said — and so will the teachers who help impart them.
"It's honestly the main reason why a lot of parents, including myself, feel the need to send their kids to a physical school with other kids, with a social environment, etc.," he said.
As the "human being in the room," teachers will take on responsibilities that AI just can't — like holding students accountable and developing "person-to-person" connections, he added.
"What happens in school, we often focus a lot on just the standards: Can kids factor a polynomial? Can they grammatically correct a sentence?" he said. "Those skills matter, but to some degree, the more important skills are: Can you deal with conflict? Can you be held accountable? Can you communicate? Can you know how to navigate social pressures?"

Related Articles


The Verge (2 minutes ago)
Amazon has invested in an AI startup that lets people make TV episodes.
Posted Jul 30, 2025 at 10:51 PM UTC, by Jay Peters.


Forbes (32 minutes ago)
Why Agentic AI Needs An Adaptive Trust Layer
The Internet used to be a place where humans were the dominant force. That's no longer true. As artificial intelligence systems evolve from passive tools to active decision-makers, a new class of threat is emerging, one that traditional security models weren't built to address.

Things have shifted quickly from generative AI to seemingly ubiquitous discussion of agentic AI. These aren't just systems that analyze or generate content. They are autonomous actors capable of setting goals, making decisions and executing tasks without human intervention. And while they promise new levels of efficiency and automation, they also introduce new risks, ones that challenge the very foundation of trust online.

The Shift to Agentic AI

I connected with Stu Solomon, CEO of HUMAN Security, to talk about the challenges posed by agentic AI. He put it bluntly: 'The Internet is no longer dominated by humans.' Bots, scrapers and AI agents now outpace human activity online, and the trend is accelerating. According to Solomon, that shift changes everything about how we define trust and protect digital ecosystems.

Agentic AI systems are already interacting with websites, mobile apps and APIs. They're making purchases, scraping data and even attempting to mimic legitimate user behavior. The problem is that most defenses today are built to detect bots at login or checkout. They weren't designed to handle intelligent agents that can evolve, adapt and act independently across the full customer journey.

The Problem with Static Trust

Traditional fraud prevention and bot mitigation tools are reactive by nature. They focus on specific threat vectors, usually bots, and make decisions at isolated points like login or transaction submission. But as Solomon points out, 'Security teams need to understand traffic behavior, intent and context, regardless of whether the actor is human, bot, or AI agent.' That's why HUMAN Security is pushing for a new model: adaptive trust.
Instead of relying on static checks, adaptive trust continuously evaluates context and behavior to determine whether traffic should be allowed, blocked, or governed more precisely. This approach is core to HUMAN Sightline, now enhanced by a new technology layer the company calls AgenticTrust. It's designed to provide actor-level visibility across humans, bots and AI agents, and to make real-time decisions based on observed intent.

Understanding Intent in Real Time

AgenticTrust operates differently than legacy systems. It doesn't just flag anomalies. It assesses click cadence, navigation patterns, behavioral anomalies and session consistency across billions of interactions to evaluate what an actor is trying to do, not just who or what they claim to be. For instance, if an AI agent is scraping a website or making a purchase, the system determines whether that action aligns with approved behavior. Rather than penalize all AI traffic or ban entire user-agent categories, AgenticTrust provides a way to distinguish the trustworthy from the suspect. It's a 'trust but verify' model, built for the complexity of AI-driven interaction.

Open Standards and Cryptographic Identity

One of the more notable elements of HUMAN's strategy is its commitment to open standards. The company recently open-sourced its HUMAN Verified AI Agent protocol, a method for AI agents to identify and authenticate themselves using public-key cryptography and HTTP Message Signatures. It's a step toward a more accountable Internet. Instead of spoofable headers and easily faked identifiers, AI agents can prove who they are cryptographically, an important capability as agent impersonation and scraping become more common. 'This project is more than a technical showcase,' says Solomon. 'It's a contribution to the trust layer for the agentic Internet: a future where AI agents must identify, authenticate and authorize themselves in order to operate freely and safely.'
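The continuous, behavior-based evaluation described above can be sketched as a toy decision function. Everything here is illustrative: the signal names, thresholds, and decision labels are assumptions for the sake of the sketch, not HUMAN's actual API or scoring logic.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Hypothetical behavioral signals; real systems track many more.
    click_interval_ms: float   # average time between clicks in the session
    pages_per_minute: float    # navigation rate
    declared_agent: bool       # did the actor identify itself as an AI agent?
    identity_verified: bool    # e.g. its signature checked out cryptographically

def trust_decision(s: SessionSignals) -> str:
    """Re-evaluated on every request, not granted once at login."""
    if s.declared_agent and s.identity_verified:
        # A known, verified agent is allowed but governed (rate limits, scopes).
        return "allow-governed"
    if s.click_interval_ms < 50 or s.pages_per_minute > 120:
        # Superhuman cadence from an actor that never declared itself:
        # challenge it rather than assume it is human.
        return "challenge" if not s.declared_agent else "block"
    return "allow"

print(trust_decision(SessionSignals(800, 4, False, False)))   # human-like pace
print(trust_decision(SessionSignals(10, 300, False, False)))  # undeclared bot pace
```

The point of the sketch is the shape of the model: the decision is a function of observed behavior and verified identity per interaction, rather than a one-time gate at login or checkout.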
Trust Becomes Infrastructure

The big picture here is that trust itself must become dynamic infrastructure, something that evolves with the behavior of digital actors rather than something that's granted once and assumed forever. Solomon summed it up: 'This moment is about more than protection. It is about unlocking new value. Businesses that can distinguish between trusted and deceptive actors in real time will be best positioned to scale, innovate and lead in the AI era.'

The Internet isn't human-only, but it can still be human-first, if we build the right trust architecture to support it. Agentic AI might change how the Internet works. Adaptive trust could determine whether it still works for people.
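The signing mechanism mentioned above, HTTP Message Signatures (RFC 9421), works by canonicalizing a set of covered request components into a "signature base" and signing that. A minimal stdlib-only sketch follows; HUMAN's open-sourced protocol uses public-key cryptography, but for a dependency-free illustration this uses the RFC's symmetric hmac-sha256 algorithm, and the component names, key id, and shared key are all hypothetical.

```python
import base64
import hashlib
import hmac

def signature_base(components, params):
    # RFC 9421 signature base: one line per covered component,
    # ending with the @signature-params pseudo-component.
    lines = [f'"{name}": {value}' for name, value in components]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines).encode()

# Covered components for a hypothetical agent request.
components = [
    ("@method", "GET"),
    ("@authority", "example.com"),
    ("agent-identity", "demo-agent/1.0"),
]
params = '("@method" "@authority" "agent-identity");alg="hmac-sha256";keyid="demo-key"'

shared_key = b"demo-shared-secret"  # stand-in; the real protocol uses key pairs
base = signature_base(components, params)
sig = base64.b64encode(hmac.new(shared_key, base, hashlib.sha256).digest()).decode()

# The receiver rebuilds the identical base from the received request and
# recomputes the MAC; a spoofed header changes the base and fails the check.
expected = base64.b64encode(hmac.new(shared_key, base, hashlib.sha256).digest()).decode()
assert hmac.compare_digest(sig, expected)

print("Signature-Input:", f"sig1={params}")
print("Signature:", f"sig1=:{sig}:")
```

With an asymmetric algorithm such as ed25519 in place of the MAC, the agent signs with its private key and anyone can verify against its published public key, which is what makes the identity unforgeable rather than merely shared-secret-gated.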


Bloomberg (32 minutes ago)
Palo Alto Networks, CyberArk Reach $25B Deal
Bloomberg's Caroline Hyde and Ed Ludlow speak with the heads of Palo Alto Networks and CyberArk about the $25 billion acquisition deal the cybersecurity companies have reached. Plus, investors look for signs of growth and payoff from AI spending as big tech companies prepare to release earnings. And Anthropic nears a fundraising deal that would value the AI startup at $170 billion. (Source: Bloomberg)