Latest news with #ElizabethLaird


Axios
2 days ago
- Business
AI in education's potential privacy nightmare
AI is now firmly entrenched in classrooms, but student privacy rules haven't caught up.
Why it matters: Chatbots can expose troves of personal data in ways few parents, students or teachers fully understand.
The big picture: The 2025-26 school year is shaping up to be one where educators feel they must embrace AI to keep students competitive.
Here are three top concerns with classroom AI, according to privacy advocates and AI companies Axios spoke to.
1. Student work could be used to train AI models
AI firms are constantly seeking data to train their models. They're not required to say exactly where they get it, but they do have to say how they're using customer data, especially when they're dealing with students.
Laws like the Family Educational Rights and Privacy Act (FERPA) don't guarantee meaningful protections for students. FERPA was signed into law under President Ford in 1974 and has not been significantly updated since.
"Penalty for violating FERPA is that your federal funding is withheld," Elizabeth Laird, director at the Center for Democracy and Technology, told Axios. "And that has been enforced exactly zero times. Literally never."
Most educational AI firms say they're not training models on classroom work. Content submitted by teachers and students is not used to train the foundational AI models that underlie Khan Academy's AI tutor, Khanmigo, the company's chief learning officer, Kristen DiCerbo, told Axios.
But training on a diverse set of student data would make the models less biased, DiCerbo said: "There's no easy answer to these things, and it's all trade-offs between different priorities."
Institutions technically could allow student work to be used for AI training, though they're unlikely to do so, several educators told Axios.
Yes, but: Data that's "publicly available" on the web is a different story.
Business Insider recently reported on what it described as a list of sites that Anthropic contractors were allowed to scrape — including domains from Harvard, Princeton, Yale, Northwestern and other universities. Funding mandates often require universities to post student research online, meaning more of it is considered freely available data for training AI.
An Anthropic spokesperson told Axios that it could not validate the list of sites found by Business Insider because it was created by a third-party vendor without Anthropic's involvement.
2. Off-the-shelf AI tools could expose student data
Many teachers are experimenting with free chatbot tools. Some are from well-known players like OpenAI, Google, Perplexity and Anthropic. Others are from lesser-known startups with questionable privacy policies. In many cases, educators use these apps without district approval or formal guidance.
Accelerating pushes from both big tech and President Trump for school and student adoption of AI have changed the vibe around AI heading into the new academic year, ed tech experts told Axios.
"Where in the 2024-2025 school year most schools had the LLM on lockdown through their filter, this year all flowers will bloom," Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools, told Axios.
Products designed for educational use, like ChatGPT Edu, do not train on student data, but some of the consumer-facing free and paid versions of ChatGPT and other chatbots have different policies.
"That's where things get tricky," says Melissa Loble, chief academic officer at Instructure, the company behind the learning management system known as Canvas. "If AI tools are used outside our system, the data may not be protected under the school's policies."
Yes, but: Teachers are often the best judges of AI tools for their students. Ed tech is "a bottom-up adoption industry. It grows and thrives on teachers finding tools they like for teaching and learning and then getting districts to adopt," Wincup says.
3. Hacks are an increasing threat
Earlier this year, a breach at PowerSchool — a widely used student information system — exposed sensitive personal data of tens of thousands of students and parents.
"When you introduce any new tool, when you collect any new piece of information, you are necessarily introducing increased risk," Laird says. That makes thoughtful planning critical, she added.
If AI tools store or process student data, a breach could expose not just grades and attendance records but also behavioral data, writing samples and private communications.
One way to limit what a breach can expose is to delete data periodically. DiCerbo says Khan Academy deletes chats after 365 days.
Yes, but: Part of chatbots' appeal is that they can remember and learn from previous conversations, so some users want to store more information than might be safe.
Between the lines: AI is steamrolling into classrooms and colleges, and privacy is just one item on a long list of concerns these institutions must manage.
Khan Academy's DiCerbo says AI adoption is moving faster than anything she has seen in her 20 years in ed tech. Khan Academy expects to reach a million students with Khanmigo, its AI-powered tutor that launched in 2023.
Earlier this year the California State University system introduced ChatGPT Edu to more than 460,000 students and over 63,000 staff and faculty across its 23 campuses. Google just started offering its AI Pro plan for free to students over 18 for a year.
What we're watching: Some ed tech providers are looking beyond OpenAI, Anthropic and Google and using services like AWS and Microsoft's Azure to keep student data separate from the model providers.
Brisk Teaching, a classroom AI assistant, uses this approach to mitigate concerns that student data might be used to train new models — even though OpenAI and Google say their education-focused models don't train on user data.
Brisk Teaching founder Arman Jaffer told Axios that there's a lot of "lost trust" between schools and the big AI providers. "It's just easier for us to say Google is not touching your data because they could potentially use it to train the next version of their model," he said.


Boston Globe
07-08-2025
Students have been called to the office — and even arrested — for AI surveillance false alarms
Earlier in the day, her friends had teased the teen about her tanned complexion and called her 'Mexican,' even though she's not. When a friend asked what she was planning for Thursday, she wrote: 'on Thursday we kill all the Mexico's.'
Mathis said the comments were 'wrong' and 'stupid,' but context showed they were not a threat.
'It made me feel like, is this the America we live in?' Mathis said of her daughter's arrest. 'And it was this stupid, stupid technology that is just going through picking up random words and not looking at context.'
Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices. Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, the technology can dip into online conversations and immediately notify both school officials and law enforcement.
Educators say the technology has saved lives. But critics warn it can criminalize children for careless words.
'It has routinized law enforcement access and presence in students' lives, including in their home,' said Elizabeth Laird, a director at the Center for Democracy and Technology.
Schools ratchet up vigilance for threats
In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement.
The 13-year-old girl arrested in August 2023 had been texting with friends on a chat function tied to her school email at Fairview Middle School, which uses Gaggle to monitor students' accounts. (The Associated Press is withholding the girl's name to protect her privacy. The school district did not respond to a request for comment.)
Taken to jail, the teen was interrogated and strip-searched, and her parents weren't allowed to talk to her until the next day, according to a lawsuit they filed against the school system. She didn't know why her parents weren't there. 'She told me afterwards, "I thought you hated me." That kind of haunts you,' said Mathis, the girl's mother.
A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl.
Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. 'I wish that was treated as a teachable moment, not a law enforcement moment,' Patterson said.
Private student chats face unexpected scrutiny
Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours.
Alexa Manganiotis, 16, said she was startled by how quickly monitoring software works. West Palm Beach's Dreyfoos School of the Arts, which she attends, piloted Lightspeed Alert, a surveillance program, last year.
Interviewing a teacher for her school newspaper, Alexa discovered that two students once typed something threatening about that teacher on a school computer, then deleted it. Lightspeed picked it up, and 'they were taken away like five minutes later,' Alexa said.
Teenagers face steeper consequences than adults for what they write online, Alexa said. 'If an adult makes a super racist joke that's threatening on their computer, they can delete it, and they wouldn't be arrested,' she said.
Amy Bennett, chief of staff for Lightspeed Systems, said the software helps understaffed schools 'be proactive rather than punitive' by identifying early warning signs of bullying, self-harm, violence or abuse. The technology can also involve law enforcement in responses to mental health crises.
In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. Those alerts led to 72 involuntary hospitalizations under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others.
'A really high number of children who experience involuntary examination remember it as a really traumatic and damaging experience — not something that helps them with their mental health care,' said Sam Boyd, an attorney with the Southern Poverty Law Center.
The Polk and West Palm Beach school districts did not provide comment.
An analysis shows a high rate of false alarms
Information that would allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and not publicly available unless schools track the data themselves.
Gaggle flagged more than 1,200 incidents in the Lawrence, Kansas, school district in a recent 10-month period. But almost two-thirds of those alerts were deemed nonissues by school officials — including over 200 false alarms from student homework, according to an Associated Press analysis of data obtained via a public records request.
Students in one photography class were called to the principal's office over concerns that Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts.
Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it contained the words 'mental health.' 'I think ideally we wouldn't stick a new and shiny solution of AI on a deep-rooted issue of teenage mental health and the suicide rates in America, but that's where we're at right now,' Torkzaban said.
She was among a group of student journalists and artists at Lawrence High School who filed a lawsuit against the school system last week, alleging Gaggle subjected them to unconstitutional surveillance. School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence.
'Sometimes you have to look at the trade for the greater good,' said Board of Education member Anne Costello in a July 2024 board meeting.
Two years after their ordeal, Mathis said her daughter is doing better, although she's still 'terrified' of running into one of the school officers who arrested her. One bright spot, she said, was the compassion of the teachers at her daughter's alternative school. They took time every day to let the kids share their feelings and frustrations, without judgment. 'It's like we just want kids to be these little soldiers, and they're not,' said Mathis. 'They're just humans.'