AI cheating surge pushes schools into chaos


Axios · May 26, 2025

High schools and colleges are stuck in limbo: Use of generative AI to cut corners and cheat is rampant, but there's no clear consensus on how to fight back.
Why it matters: AI is here to stay, forcing educators to adapt.
That means sussing out when students are using it — and avoiding the temptation of overusing it themselves.
"I have to be a teacher and an AI detector at the same time," says Stephen Cicirelli, an English professor at Saint Peter's University in Jersey City, N.J. "[Any assignment] that you take home and have time to play around with, there's going to be doubt hanging over it."
Cicirelli captured the zeitgeist with a viral post on X about how one of his students got caught submitting an AI-written paper — and apologized with an email that also appeared to be written by ChatGPT.
"You're coming to me after to apologize and do the human thing and ask for grace," he says. "You're not even doing that yourself?"
By the numbers: Use is ubiquitous in college. A survey of college students taken in January 2023, just two months after ChatGPT's launch, found that some 90% had already used it on assignments, New York Magazine reports.
One in four 13- to 17-year-olds say they use ChatGPT for help with schoolwork, per a recent Pew survey. That's double the share in 2023.
Driving the news: The proliferation of AI-assisted schoolwork is worrying academic leaders.
66% think generative AI will cut into students' attention spans, according to a survey of university presidents, chancellors, deans and other academic leaders from the American Association of Colleges & Universities and Elon University's Imagining the Digital Future Center.
59% say cheating has increased on campus.
56% say their schools aren't ready to prepare students for the AI era.
"It's an undeniable and unavoidable disruption," says Lee Rainie, director of Elon's digital future center. "You can't avert your eyes."
One big snag: Teachers can't agree on what's acceptable in this new world.
For example, 51% of higher education leaders say it's fine for a student to write a paper off a detailed outline generated by AI, while the rest say it's not or they don't know, per the AAC&U and Elon survey.
Policies vary from classroom to classroom within the same school.
Plus, the rise of AI is causing unforeseen headaches.
Teachers run assignments through detectors, which often don't get it right, either missing AI-generated work or mistakenly flagging original work as written by AI. Students who didn't use AI have had to appeal to their schools or submit proof of their process to avoid getting zeroes, The New York Times reports.
Instructors are getting caught leaning on ChatGPT, too. One Northeastern senior demanded tuition reimbursement after discovering her professor had used AI to prep lecture notes and slides, according to The New York Times.
The other side: As much as they're struggling to wrangle AI use, many educators believe it has the potential to help students — and that schools should be teaching them how to use it.
American University's business school is launching an AI institute for just that purpose. "When 18-year-olds show up here as first-years, we ask them, 'How many of your high school teachers told you not to use AI?' And most of them raise their hand," David Marchick, the dean of American University's Kogod School of Business, told Axios' Megan Morrone. "We say, 'Here, you're using AI, starting today.'"
ChatGPT can serve as a real-time editor that refines students' writing, or it can speed up research so students focus on organizing big ideas instead of gathering information, Jeanne Beatrix Law, an English professor at Kennesaw State University, writes in The Conversation.
"Don't block AI ... Instead, let's put in place some of the same safety and wellness protocols that it took us a decade to build for social media and web 1.0," says Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools.
What to watch: "There is a gigantic question across academic institutions right now," Rainie tells Axios. "How do you assess mastery?"
Cicirelli says he's asking students to draft their work in Google Docs so he can see the brainstorming and writing process.

