Hypocrite Teachers Are Telling Students Not to Use AI While Using It to Grade Their Work
As the New York Times reports, educators and administrators are using AI for grading papers, tutoring students, and even analyzing their records.
For Rhode Island history teacher Jon Gold, using ChatGPT for lesson plans seems like a relatively acceptable use case, and for transparency's sake, he tells his classes how he uses it. At the same time, he tells his students not to use it themselves for essays or research.
"I am more pro-AI-literacy than I am pro-AI-use," Gold told the NYT.
Jennifer Carolan, herself a former history teacher who founded the AI edtech investment firm Reach Capital, insisted when speaking to the newspaper that the technology "is already being used by the majority of teachers and students." That informal figure may be a bit inflated: a recently published survey from the RAND Corporation found that only 25 percent of teachers admitted to using AI for teaching or lesson planning. But because those responses came from the 2023-2024 school year, that share has likely grown since.
The tension between teachers using AI and students using it seems to be even stronger at the administrative level. For Washington, DC charter school administrator Alex Baron, for instance, students who use Alphabet-owned apps like Photomath and Google Lens are cheating — but his own use of the technology is, apparently, legitimate.
Baron told the NYT that he often uses AI programs to analyze students' academic and behavioral data and sort them into targeted groups for support. Though he didn't disclose which apps he used, the AI-forward administrator and others like him have a multitude of options at their disposal. Software called Securly Discern, for instance, claims it can help administrators identify and support "at-risk students" by monitoring their online behavior — in short, by spying on them and reporting back to the principal.
Regardless of whether their teachers are using AI for good or for evil, students themselves aren't in love with the idea of educators capitalizing on technologies they're barred from using. When asked by the NYT earlier this year how they felt about teachers using AI for grading — perhaps the most problematic use case, given how often AI gets things wrong — several said they found the practice hypocritical and even insulting.
"If students are putting in work and effort, why should teachers be allowed to be lazy and let AI grade for them?" opined a high schooler named Aidan. "It gives a sense that, because I am older and I am your superior, I can use A.I. and you can't, which is not a good example for students."
A Sacramento teacher named LP, meanwhile, defended their use of AI.
"Teachers... have earned credentials and often multiple master's degrees," they wrote. "We know the generative skill of composition. If we can use AI to support grading, we can spend more time planning, prepping, and supporting students."
Obviously, it's a contentious question — and one that isn't likely to go away as AI becomes ever more normalized.
More on AI and education: High Schools Training Students for Manual Labor as AI Looms Over College and Jobs
