AI in education's potential privacy nightmare
AI is now firmly entrenched in classrooms, but student privacy rules haven't caught up.
Why it matters: Chatbots can expose troves of personal data in ways few parents, students or teachers fully understand.
The big picture: The 2025-26 school year is shaping up as one in which educators feel they must embrace AI to keep students competitive.
Here are three top concerns with classroom AI, according to privacy advocates and AI companies Axios spoke to.
1. Student work could be used to train AI models
AI firms are constantly seeking data to train their models. They're not required to say exactly where they get it, but they do have to say how they're using customer data, especially when they're dealing with students.
Laws like the Family Educational Rights and Privacy Act (FERPA) don't guarantee meaningful protections for students. FERPA was signed into law by President Ford in 1974 and has not been significantly updated since.
"Penalty for violating FERPA is that your federal funding is withheld," Elizabeth Laird, director at the Center for Democracy and Technology, told Axios. "And that has been enforced exactly zero times. Literally never."
Most educational AI firms say they're not training models on classroom work.
Content submitted by teachers and students is not used to train the foundational AI models that underlie Khan Academy's AI tutor, Khanmigo, the company's chief learning officer, Kristen DiCerbo, told Axios.
But training on a diverse set of student data would make the models less biased, DiCerbo said: "There's no easy answer to these things, and it's all trade-offs between different priorities."
Institutions technically could allow student work to be used for AI training, though they're unlikely to do so, several educators told Axios.
Yes, but: Data that's "publicly available" on the web is a different story.
Business Insider recently reported on what it described as a list of sites that Anthropic contractors were allowed to scrape — including domains from Harvard, Princeton, Yale, Northwestern and other universities.
Funding mandates often require universities to post student research online, meaning more of it is considered freely available data for training AI.
An Anthropic spokesperson told Axios that it could not validate the list of sites found by Business Insider because it was created by a third-party vendor without Anthropic's involvement.
2. Off-the-shelf AI tools could expose student data
Many teachers are experimenting with free chatbot tools. Some are from well-known players like OpenAI, Google, Perplexity and Anthropic. Others are from lesser-known startups with questionable privacy policies.
In many cases, educators use these apps without district approval or formal guidance.
Accelerating pushes from both Big Tech and President Trump for schools and students to adopt AI have changed the vibe heading into the new academic year, ed tech experts told Axios.
"Where in the 2024-2025 school year most schools had the LLM on lockdown through their filter, this year all flowers will bloom," Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools, told Axios.
Products designed for educational use, like ChatGPT Edu, do not train on student data, but some of the consumer-facing free and paid versions of ChatGPT and other chatbots have different policies.
"That's where things get tricky," says Melissa Loble, chief academic officer at Instructure, the company behind the learning management system known as Canvas. "If AI tools are used outside our system, the data may not be protected under the school's policies."
Yes, but: Teachers are often the best judges of AI tools for their students.
Ed tech is "a bottom-up adoption industry. It grows and thrives on teachers finding tools they like for teaching and learning and then getting districts to adopt," Wincup says.
3. Hacks are an increasing threat
Earlier this year, a breach at PowerSchool — a widely used student information system — exposed sensitive personal data of tens of thousands of students and parents.
"When you introduce any new tool, when you collect any new piece of information, you are necessarily introducing increased risk," Laird says. That makes thoughtful planning critical, she added.
If AI tools store or process student data, a breach could expose not just grades and attendance records but also behavioral data, writing samples, and private communications.
One way to limit the damage from a breach is to delete stored data periodically. DiCerbo says Khan Academy deletes chats after 365 days.
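Khan Academy hasn't described the mechanics of that deletion, but a retention policy like this typically comes down to a scheduled purge job. A minimal sketch, assuming a hypothetical SQLite store with a `chats` table and `created_at` column (placeholders, not Khan Academy's actual schema):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # matches the policy DiCerbo describes

def purge_old_chats(db_path: str) -> int:
    """Delete chat rows older than the retention window; return how many were removed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        # Rows older than the cutoff are dropped; everything newer is untouched.
        cur = conn.execute(
            "DELETE FROM chats WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cur.rowcount

if __name__ == "__main__":
    print(f"Purged {purge_old_chats('student_chats.db')} expired chats")
```

The design point is that expiry is enforced centrally on a schedule, rather than trusting each feature to clean up after itself.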
Yes, but: A big part of chatbots' appeal is that they remember and learn from previous conversations, so some users want to keep more data around than might be safe.
Between the lines: AI is steamrolling into classrooms and colleges, and privacy is just one item on a long list of concerns these institutions must manage.
Khan Academy's DiCerbo says AI adoption is moving faster than anything she's seen in her 20 years in ed tech. Khan Academy expects to reach a million students with Khanmigo, its AI-powered tutor that launched in 2023.
Earlier this year the California State University system introduced ChatGPT Edu to more than 460,000 students and over 63,000 staff and faculty across its 23 campuses.
Google just started offering its AI Pro plan for free to students over 18 for a year.
What we're watching: Some ed tech providers are looking beyond OpenAI, Anthropic and Google, instead using services like AWS and Microsoft's Azure to keep student data separate from the model providers.
Brisk Teaching, a classroom AI assistant, uses this approach to mitigate concerns that student data might be used to train new models — even though OpenAI and Google say that their education-focused models don't train on user data.
Brisk Teaching founder Arman Jaffer told Axios that there's a lot of "lost trust" between schools and the big AI providers. "It's just easier for us to say Google is not touching your data because they could potentially use it to train the next version of their model," he said.
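Neither Brisk nor the cloud providers spell out implementation details here, but the pattern Jaffer describes is easy to sketch: prompts go to a model hosted inside the customer's own cloud account rather than to the model provider's API. A minimal illustration, assuming AWS Bedrock, with an illustrative region and model ID rather than Brisk's actual configuration:

```python
import boto3  # AWS SDK for Python

# Cloud-intermediary pattern: the request is handled by Amazon Bedrock inside
# the customer's AWS account, so student text is not sent directly to the
# model provider's own API. Region and model ID below are assumptions.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Give feedback on this student essay draft: ..."}],
        }
    ],
    inferenceConfig={"maxTokens": 512},
)

print(response["output"]["message"]["content"][0]["text"])
```

The appeal, per Jaffer, is contractual as much as technical: the vendor can tell schools that the model provider never receives raw student data it could use to "train the next version of their model."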