
A new headache for honest students: proving they didn't use A.I.
Burrell, a student at the University of Houston-Downtown, had been accused of using AI to write an assignment for her English class, and her grade suffered for it. But her submission was not, in fact, the instantaneous output of a chatbot. According to Google Docs editing history reviewed by The New York Times, she had drafted and revised the assignment over the course of two days. It was flagged anyway by a service offered by the plagiarism-detection company Turnitin that aims to identify text generated by artificial intelligence.
Panicked, Burrell appealed the decision. Her grade was restored after she sent a 15-page PDF of time-stamped screenshots and notes from her writing process to the chair of her English department.
Still, the episode made her painfully aware of the hazards of being a student -- even an honest one -- in an academic landscape distorted by AI cheating.
Generative AI tools including ChatGPT are reshaping education for the students who use them to cut corners. In a Pew Research Center survey conducted last year, 26% of teenagers said they had used ChatGPT for schoolwork, double the share of the previous year. Student use of AI chatbots to compose essays and solve coding problems has sent teachers scrambling for solutions.
But the specter of AI misuse, and the imperfect systems used to root it out, may also be affecting students who are following the rules. In interviews, high school, college and graduate students described persistent anxiety about being accused of using AI on work they had completed themselves -- and facing potentially devastating academic consequences.
In response, many students have taken up methods of self-surveillance that they say feel more like self-preservation. Some record their screens for hours at a time as they do their schoolwork. Others make a point of composing class papers only in word processors that track their keystrokes closely enough to produce a detailed edit history.
The next time Burrell had to submit an assignment for the class in which she had been accused of using AI, she uploaded a 93-minute YouTube video documenting her writing process. It was annoying, she said, but necessary for her peace of mind.
'I was so frustrated and paranoid that my grade was going to suffer because of something I didn't do,' she said.
These students' fears are borne out by research, reported in The Washington Post and Bloomberg Businessweek, indicating that AI-detection software, a booming industry in recent years, often misidentifies human-written work as AI-generated.
A new study of a dozen AI-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as AI-generated about 6.8% of the time, on average.
'At least from our analysis, current detectors are not ready to be used in practice in schools to detect AI plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland.
Turnitin, which was not included in the analysis, said in 2023 that its software mistakenly flagged human-written sentences about 4% of the time. A detection program from OpenAI that had a 9% false-positive rate, according to the company, was discontinued after six months.
Turnitin did not respond to requests for comment for this article, but has said that its scores should not be used as the sole determinant of AI misuse.
'We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so, it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances,' Annie Chechitelli, Turnitin's chief product officer, wrote in a 2023 blog post.
Some students are mobilizing against the use of AI-detection tools, arguing that the risk of penalizing innocent students is too great. More than 1,000 people have signed an online petition, one of the first of its kind, started last month by Kelsey Auman, which calls on the University at Buffalo in New York to disable its AI-detection service.
A month before her graduation from the university's master of public health program, Auman was told by a professor that three of her assignments had been flagged by Turnitin. She reached out to other members of the 20-person course, and five told her that they had received similar messages, she recalled in a recent interview. Two said that their graduations had been delayed.
Auman, 29, was terrified she would not graduate. She had finished her undergraduate studies well before ChatGPT arrived on campuses, and it had never occurred to her to stockpile evidence in case she was accused of cheating using generative AI.
'You just assume that if you do your work, you're going to be fine -- until you aren't,' she said.
Auman said she was concerned that AI-detection software would punish students whose writing fell outside 'algorithmic norms' for reasons that had nothing to do with artificial intelligence. In a 2023 study, Stanford University researchers found that AI-detection services were more likely to misclassify the work of students who were not native English speakers. (Turnitin has disputed those findings.)
After Auman met with her professor and exchanged lengthy emails with the school's Office of Academic Integrity, she was notified that she would graduate as planned, without any charges of academic dishonesty.
'I'm just really glad I'm graduating,' she said. 'I can't imagine living in this feeling of fear for the rest of my academic career.'
John Della Contrada, a spokesperson for the University at Buffalo, said that the school was not considering discontinuing its use of Turnitin's AI-detection service in response to the petition.
'To ensure fairness, the university does not rely solely on AI-detection software when adjudicating cases of alleged academic dishonesty,' he wrote in an email, adding that the university guaranteed due process for accused students, a right to appeal and remediation options for first-time offenders. (Burrell's school, the University of Houston-Downtown, warns faculty members that plagiarism detectors including Turnitin 'are inconsistent and can easily be misused,' but still makes them available.)
Other schools have determined that detection software is more trouble than it is worth: The University of California, Berkeley; Vanderbilt; and Georgetown have all cited reliability concerns in their decisions to disable Turnitin's AI-detection feature.
'While we recognize that AI detection may give some instructors peace of mind, we've noticed that overreliance on technology can damage a student-and-instructor relationship more than it can help it,' Jenae Cohn, executive director of the Center for Teaching and Learning at UC Berkeley, wrote in an email.
Sydney Gill, an 18-year-old high school senior in San Francisco, said she appreciated that teachers were in an extremely difficult position navigating an academic environment jumbled by AI. She added that she had second-guessed her writing ever since an essay she entered in a writing competition in late 2023 was wrongly marked as AI-generated.
That anxiety persisted as she wrote college application essays this year. 'I don't want to say it's life-changing, but it definitely altered how I approach all of my writing in the future,' she said.
In 2023, Kathryn Mayo, a professor at Cosumnes River College in Sacramento, California, started using AI-detection tools from Copyleaks and Scribbr on essays from students in her photo-history and photo-theory classes. She was relieved, at first, to find what she hoped would be a straightforward fix in a complex and frustrating moment for teachers.
Then she ran some of her own writing through the service, and was notified that it had been partly generated using AI. 'I was so embarrassed,' she said.
She has since changed some of her assignment prompts to make them more personal, which she hopes will make them harder for students to outsource to ChatGPT. She tries to engage any student whom she seriously suspects of misusing AI in a gentle conversation about the writing process.
Sometimes students sheepishly admit to cheating, she said. Other times, they just drop the class.
This article originally appeared in The New York Times.
Related Articles
Yahoo
Why Shopify Stock Popped Today
A Wells Fargo analyst raised their price target on Shopify's stock today, calling the company their "signature pick" and arguing it could be a lesser-known beneficiary of the AI revolution. Home to the Shopify Magic and Sidekick solutions, Shopify looks poised to thrive amid AI's rise rather than be disrupted by it.

Shares of North America's leading commerce enabler, Shopify (NASDAQ: SHOP), were 6% higher as of 3:30 p.m. ET Friday, according to data provided by S&P Global Market Intelligence. The rise stems from an analyst at Wells Fargo raising their price target on the stock from $107 to $125 and naming Shopify a "signature pick."

Although Nvidia, Palantir, OpenAI, and others capture most of the artificial intelligence (AI) fanfare, the analyst believes Shopify could prove to be a thematic AI story -- and I'd agree.

In April this year, a leaked memo from Chief Executive Officer Tobi Lütke went viral. In it, he stated, "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI." Although this focuses on incorporating AI into Shopify's operations, the quote highlights that AI will be at the heart of what the company does going forward, whether internally or through its products.

In 2023, the company launched Shopify Magic, a toolbox of AI-powered solutions (think AI-generated product descriptions or email campaigns, automated chat help, or image editing). Then, in 2024, it launched Sidekick, an AI-driven commerce assistant, to help with areas such as inventory optimization, pricing strategies, and gathering business insights. Just last quarter, Shopify launched a tool that enables merchants to source products more effectively, allowing them to navigate the complex tariff environment in real time.

Adding over 600 new product features for its merchants in the last two years alone, Shopify appears likely to remain an AI innovator rather than a disruptee, in my opinion. Though Shopify stock isn't cheap at 83 times cash from operations, its growth potential remains massive, as it holds only a 2% market share in its core geographies.

Before you buy stock in Shopify, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Shopify wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $674,395!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $858,011!* Now, it's worth noting Stock Advisor's total average return is 997% -- a market-crushing outperformance compared to 172% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.

*Stock Advisor returns as of June 2, 2025

Wells Fargo is an advertising partner of Motley Fool Money. Josh Kohn-Lindquist has positions in Nvidia and Shopify. The Motley Fool has positions in and recommends Nvidia, Palantir Technologies, and Shopify. The Motley Fool has a disclosure policy.

Why Shopify Stock Popped Today was originally published by The Motley Fool.


Forbes
Cognitive Cities Are Rising To Define The Urban Future
Cities, where almost 60 percent of all humans now live, often struggle with a long list of issues that includes traffic congestion, inefficient public services, high carbon emissions, economic and public safety challenges, and aging water and energy systems. As a result, there is a large and growing demand for novel solutions.

It won't come as a surprise that new technologies are playing an increasingly important role in addressing a wide range of urban needs. The term smart city, which first began to appear in the 1990s, is often used to describe an urban area that adopts innovative digital technologies, data, sensors, and connectivity to improve a community's livability, workability, and sustainability. The smart city movement has had plenty of successes (and its fair share of failures and backlash), and public agencies committed to using innovative technologies and data to drive better governance can be found in every part of the world.

Now a new concept is emerging that builds on the successes and limitations of smart cities. It's called the cognitive city: an urban environment in which AI, used in conjunction with other related emerging technologies, creates a more intelligent, responsive, and adaptable urban experience. This shift is unsurprising. It's happening as the intelligence age drives the emergence of a cognitive industrial revolution, an economic transformation that is forcing every organization to make sense of, and see the opportunities in, a world of thinking machines.

At their core, cognitive cities are AI-powered and data-driven. They use these technologies and others to understand patterns in the urban space, to help with decision-making, planning, and governance, and to power innovative urban solutions. Instead of being reactive, city services aim to be proactive, anticipating needs and challenges. Over time, the city learns about its community, helping it evolve to meet current and future needs.

This may all sound a little too abstract, so let's put it in perspective by exploring two cognitive cities being constructed right now. Perhaps the most famous cognitive city underway is in the northwestern region of the Kingdom of Saudi Arabia. Called NEOM, this area includes The Line. Instead of being built in a traditional radial shape, The Line is a long, narrow strip, proposed to be 106 miles in length, 656 feet in width, and 1,640 feet in height. Advanced cognitive technologies are at the heart of this city, enabling the optimization of transportation, resource management, and energy consumption, all of it non-carbon-based. The city is being designed to understand residents' needs and support personalized and proactive services such as healthcare, activity scheduling, and temperature management.

The city of Aion Sentia, underway in Abu Dhabi in the United Arab Emirates, has even bolder aspirations: it is being designed to anticipate even more resident needs. If you like to buy a latte from your favorite coffee store each day at 8 a.m., it will be ready for you. If you have an upcoming anniversary, you'll be reminded, and reservations will automatically be made at your favorite restaurant. Central to this cognitive city will be a city-provided app that acts as your urban assistant. If you get an energy bill that is higher than expected, for example, you can tell the app, and it will figure out what you need to do to reduce your energy use. If you're feeling ill, the app will make a medical appointment and take care of all the related logistics.
Other cities embracing the cognitive city concept include Woven City in Japan, Songdo in South Korea, and Telosa in the United States. This may all sound rather futuristic, and it is: much of it has yet to be built and proven. The concept also faces significant challenges around privacy and the extent to which residents even want automation in every aspect of their lives. Toronto's proposed Sidewalk Labs project still haunts both the city and its developers, and it serves as a litmus test for cognitive technology: issues surrounding privacy and data contributed greatly to its abandonment. In the marketplace of ideas, communities will need to weigh the benefits of an AI-powered urban future against the concerns and risks it presents. These questions and others won't be second-order issues; they will need to be addressed as priorities as we enter the era of cognitive cities.

Business Insider
AI leaders have a new term for the fact that their models are not always so intelligent

As academics, independent developers, and the biggest tech companies in the world drive us closer to artificial general intelligence -- a still hypothetical form of intelligence that matches human capabilities -- they've hit some roadblocks. Many emerging models are prone to hallucinations, misinformation, and simple errors.

Google CEO Sundar Pichai referred to this phase of AI as AJI, or "artificial jagged intelligence," on a recent episode of Lex Fridman's podcast. "I don't know who used it first, maybe Karpathy did," Pichai said, referring to deep learning and computer vision specialist Andrej Karpathy, who cofounded OpenAI before leaving last year.

AJI is a bit of a metaphor for the trajectory of AI development: jagged, marked at once by sparks of genius and basic mistakes. In a 2024 X post titled "Jagged Intelligence," Karpathy described the term as a "word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems." He then posted examples of state-of-the-art large language models failing to understand that 9.9 is bigger than 9.11, making "non-sensical decisions" in a game of tic-tac-toe, and struggling to count.

The issue is that unlike humans, "where a lot of knowledge and problem-solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood," the jagged edges of AI are not always clear or predictable, Karpathy said.

Pichai echoed the idea. "You see what they can do and then you can trivially find they make numerical errors or counting R's in strawberry or something, which seems to trip up most models," Pichai said. "I feel like we are in the AJI phase where dramatic progress, some things don't work well, but overall, you're seeing lots of progress."

When DeepMind launched in 2010, its team would talk about a 20-year timeline for AGI, Pichai said. Google acquired the company in 2014. Pichai thinks it'll take a little longer than that, but by 2030, "I would stress it doesn't matter what that definition is because you will have mind-blowing progress on many dimensions." By then, the world will also need a clear system for labeling AI-generated content to "distinguish reality," he said.

"Progress" is a vague term, but Pichai has spoken at length about the benefits we'll see from AI development. At the UN's Summit of the Future in September 2024, he outlined four specific ways AI would advance humanity: improving access to knowledge in native languages, accelerating scientific discovery, mitigating climate disaster, and contributing to economic progress.
As academics, independent developers, and the biggest tech companies in the world drive us closer to artificial general intelligence — a still hypothetical form of intelligence that matches human capabilities — they've hit some roadblocks. Many emerging models are prone to hallucinating, misinformation, and simple errors. Google CEO Sundar Pichai referred to this phase of AI as AJI, or "artificial jagged intelligence," on a recent episode of Lex Fridman's podcast. "I don't know who used it first, maybe Karpathy did," Pichai said, referring to deep learning and computer vision specialist Andrej Karpathy, who cofounded OpenAI before leaving last year. AJI is a bit of a metaphor for the trajectory of AI development — jagged, marked at once by sparks of genius and basic mistakes. In a 2024 X post titled "Jagged Intelligence," Karpathy described the term as a "word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems." He then posted examples of state of the art large language models failing to understand that 9.9 is bigger than 9.11, making "non-sensical decisions" in a game of tic-tac-toe, and struggling to count. The issue is that unlike humans, "where a lot of knowledge and problem-solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood," the jagged edges of AI are not always clear or predictable, Karpathy said. Pichai echoed the idea. "You see what they can do and then you can trivially find they make numerical errors or counting R's in strawberry or something, which seems to trip up most models," Pichai said. "I feel like we are in the AJI phase where dramatic progress, some things don't work well, but overall, you're seeing lots of progress." In 2010, when Google DeepMind launched, its team would talk about a 20-year timeline for AGI, Pichai said. Google subsequently acquired DeepMind in 2014. Pichai thinks it'll take a little longer than that, but by 2030, "I would stress it doesn't matter what that definition is because you will have mind-blowing progress on many dimensions." By then the world will also need a clear system for labeling AI-generated content to "distinguish reality," he said. "Progress" is a vague term, but Pichai has spoken at length about the benefits we'll see from AI development. At the UN's Summit of the Future in September 2024, he outlined four specific ways that AI would advance humanity — improving access to knowledge in native languages, accelerating scientific discovery, mitigating climate disaster, and contributing to economic progress.