You Can Detect AI Writing With These Tips

CNET | 18-07-2025
As long as there's been learning in the world, there's been cheating. But it's evolved in 2025: students who once would have completed an assignment using someone else's work can now use an AI writing tool for free, in 30 seconds flat. There's no need to shell out cash for a shady essay-writing service where an unscrupulous person writes 1,200 words for you on the fall of the Roman Empire when you've got free access to AI.
As a professor of strategic communications, I encounter students using AI tools like ChatGPT, Grammarly and EssayGenius on a regular basis. It's usually easy to tell when a student has used one of these tools to draft their entire work. The telltale signs include ambiguous language and a super annoying tendency for AI to spit out text that leans heavily on the wording of the assignment prompt itself.
How to tell if it was written by AI
Some of the most common ways to tell if something was written using AI are:
Key terms from your assignment prompt are used repeatedly.
Inaccurate facts are included, thanks to the AI chatbot hallucinating.
Sentences don't sound natural.
Explanations are generic and repetitive, rather than actually leading anywhere.
The tone doesn't match the student's usual writing style.
For example, a student might use ChatGPT -- an AI chatbot built on a large language model, with a conversational question-and-answer format -- to write a short essay response to a prompt by simply copying and pasting the essay question into the tool.
Take this prompt: In 300 words or fewer, explain how this SWOT and brand audit will inform your final pitch.
This is ChatGPT's result:
Screenshot by Rachel Kane/CNET
I have received responses like this, or ones very close to it, a few times in my tenure as a teacher, and one of the most recognizable red flags is the number of times key terms from the prompt are used in the final product.
Students don't usually repeat key terms from the prompt in their work in this way, and the results read closer to old-school SEO-driven copy meant to define these terms rather than a unique essay meant to demonstrate an understanding of the subject matter.
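To make that red flag concrete (this sketch is my illustration, not a tool the article mentions), you can count how often the longer words from a prompt reappear in a submission. A spike in prompt-specific terms like "brand audit" is the SEO-copy pattern described above. A minimal Python sketch, assuming plain-text inputs:

```python
import re
from collections import Counter

def prompt_term_hits(prompt: str, essay: str, min_len: int = 5) -> Counter:
    """Count how often each longer word from the prompt reappears in the essay.

    Short function words are skipped (min_len) so only distinctive
    prompt vocabulary is tallied.
    """
    def words(text: str) -> list:
        return re.findall(r"[a-z']+", text.lower())

    key_terms = {w for w in words(prompt) if len(w) >= min_len}
    essay_counts = Counter(words(essay))
    # Keep only prompt terms that actually show up in the essay.
    return Counter({t: essay_counts[t] for t in key_terms if essay_counts[t]})
```

A handful of hits is normal; seeing every distinctive prompt term echoed several times each is the pattern worth a closer look.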
But can teachers use AI tools to catch students using AI tools? I devised some ways to be smarter in spotting artificial intelligence in papers.
Catch AI cheaters
Here's how to use AI tools to catch cheaters in your class.
Understand AI capabilities
There are AI tools on the market that can scan an assignment and its grading criteria and produce a fully written, cited and complete piece of work in a matter of moments, and there are detection tools, such as GPTZero and Smodin's AI detector, built to flag machine-generated text. Familiarizing yourself with tools on both sides is the first step in the war against AI-driven integrity violations.
Do as the cheaters do
Before the semester begins, copy and paste all your assignments into a tool like ChatGPT and ask it to do the work for you. When you have an example of the type of results it provides specifically in response to your assignments, you'll be better equipped to catch AI-written answers. You could also use a tool designed specifically to spot AI writing in papers.
Get a real sample of writing
At the beginning of the semester, require your students to submit a simple, fun and personal piece of writing to you. The prompt should be something like "200 words on what your favorite toy was as a child," or "Tell me a story about the most fun you ever had." Once you have a sample of the student's real writing style in hand, you can use it later to have an AI tool review that sample against what you suspect might be AI-written work.
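As an illustration of what a comparison against that baseline sample can look for (this is my own rough sketch, not a method the article prescribes), simple stylometric features like average sentence length and vocabulary richness often shift noticeably between a student's natural writing and AI output:

```python
import re

def style_profile(text: str) -> dict:
    """Very rough stylometric features for a writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        # Longer, more uniform sentences are common in AI output.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Share of distinct words; repetitive text scores lower.
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def style_gap(baseline: str, suspect: str) -> dict:
    """Absolute difference per feature between two samples."""
    a, b = style_profile(baseline), style_profile(suspect)
    return {k: abs(a[k] - b[k]) for k in a}
```

A large gap is not proof on its own, but it is one more piece of evidence to bring to a conversation with the student.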
Ask for a rewrite
If you suspect a student of using AI to cheat on their assignment, take the submitted work and ask an AI tool to rewrite the work for you. In most cases I've encountered, an AI tool will rewrite its own work in the laziest manner possible, substituting synonyms instead of changing any material elements of the "original" work.
Here's an example:
Screenshot by Rachel Kane/CNET
Screenshot by Rachel Kane/CNET
Now, let's take something an actual human (me) wrote, my CNET bio:
Screenshot by Rachel Kane/CNET
The phrasing is changed, stripping much of the soul from the writing and replacing it with sentences that are arguably clearer and more straightforward. There are also additions to the writing, presumably made for further clarity.
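The rewrite test above has a measurable signature (this sketch is my illustration, not part of the article): a lazy synonym-swap rewrite keeps most of the original wording intact, so word overlap between the submission and its AI rewrite stays high, while a genuine human rewrite of human prose, like the bio example, shifts much more of the vocabulary:

```python
import re

def word_overlap(original: str, rewrite: str) -> float:
    """Jaccard overlap of word sets between a text and its rewrite.

    Synonym-substitution rewrites leave most words untouched,
    so overlap stays close to 1.0.
    """
    def words(text: str) -> set:
        return set(re.findall(r"[a-z']+", text.lower()))

    a, b = words(original), words(rewrite)
    return len(a & b) / max(len(a | b), 1)
```

There is no magic threshold here; the point is to quantify the "laziest manner possible" pattern rather than rely on gut feel alone.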
Can you always tell if something was written using AI?
The most important part of catching cheaters who use AI to do their work is having a reasonable amount of evidence to show the student, and the administration at your school if it comes to that. Maintaining a skeptical mind when grading is vital, and your demonstrated fluency with these tools will make your case that much stronger.
Good luck out there in the new AI frontier, fellow teachers, and try not to be offended when a student turns in work written by a robot collaborator. It's up to us to make the prospect of learning more alluring than the temptation to cheat.
