
The professors are using ChatGPT, and some students aren't happy about it
In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?
Halfway through the document, which her business professor had made for a lesson on models of leadership, was an instruction to ChatGPT to 'expand on all areas. Be more detailed and specific.' It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.
Stapleton texted a friend in the class.
'Did you see the notes he put on Canvas?' she wrote, referring to the university's software platform for hosting course materials. 'He made it with ChatGPT.'
'OMG Stop,' the classmate responded. 'What the hell?'
Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts and egregious misspellings.
She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade 'academically dishonest activities,' including the unauthorized use of AI or chatbots.
'He's telling us not to use it, and then he's using it himself,' she said.
Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of AI as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.
When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed AI detection services, despite concerns about their accuracy.
But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on AI and scrutinizing course materials for words ChatGPT tends to overuse, such as 'crucial' and 'delve.' In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18% described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
(The Times has sued OpenAI for copyright infringement for use of news content without permission.)
Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.
Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some 'really nice feedback' to give Marie.
'From my perspective, the professor didn't even read anything that I wrote,' said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use AI. Working at the school was a 'third job' for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.
Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed 'in the power of AI to transform education' and that there were guidelines for both faculty and students to 'ensure that this technology enhances, rather than replaces, human creativity and oversight.' A list of dos and don'ts for faculty forbids using tools such as ChatGPT and Grammarly 'in place of authentic, human-centric feedback.'
'These tools should never be used to "do the work" for them,' MacAuslan said. 'Rather, they can be looked at as enhancements to their already established processes.'
After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.
Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. 'Not a big fan of that,' he said after being told of Marie's experience. Shovlin is also an AI faculty fellow, a role that includes developing the right ways to incorporate AI into teaching and learning.
'The value that we add as instructors is the feedback that we're able to give students,' he said. 'It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.'
Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and 'develop an ethical compass with AI,' he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. 'If you screw up, you're going to be fired,' Shovlin said.
One example he uses in his own classes: In 2023, officials at Vanderbilt University's education school responded to a mass shooting at another university by sending an email to students calling for community cohesion. The message, which described promoting a 'culture of care' by 'building strong relationships with one another,' included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticized the outsourcing of empathy to a machine, the officials involved temporarily stepped down.
Not all situations are so clear-cut. Shovlin said it was tricky to come up with rules because reasonable AI use may vary depending on the subject. The Center for Teaching, Learning and Assessment, where he is a fellow, instead has 'principles' for AI integration, one of which eschews a 'one-size-fits-all approach.'
The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.
There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative AI, while others said they didn't disclose its use because of students' skepticism about the technology.
Most, however, felt that Stapleton's experience at Northeastern — in which her professor appeared to use AI to generate class notes and slides — was perfectly fine. That was Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers.
To say a professor is 'some kind of monster' for using AI to generate slides 'is, to me, ridiculous,' he said.
Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, described ChatGPT as a partner that saved time. Lesson plans that used to take days to develop now take hours, he said. He uses it, for example, to generate data sets for fictional chain stores, which students use in an exercise to understand various statistical concepts.
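Kwaramba's actual prompts and materials are not described in detail, but a synthetic data set of the kind he mentions is easy to picture. The short Python sketch below is purely illustrative; the store names, column layout and sales distribution are invented for the example, not taken from his course.

```python
# Illustrative only: a synthetic data set for a fictional chain store, of the
# kind described above. All names, columns and distributions are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
stores = ["North Branch", "South Branch", "East Branch"]

rows = []
for store in stores:
    # 52 weeks of simulated weekly sales, drawn from a normal distribution
    weekly_sales = rng.normal(loc=50_000, scale=8_000, size=52)
    for week, sales in enumerate(weekly_sales, start=1):
        rows.append({"store": store, "week": week, "sales": round(sales, 2)})

df = pd.DataFrame(rows)

# Students could then practice descriptive statistics on the generated data.
print(df.groupby("store")["sales"].agg(["mean", "std", "min", "max"]))
```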
'I see it as the age of the calculator on steroids,' Kwaramba said.
Kwaramba said he now had more time for student office hours.
Other professors, including David Malan at Harvard University, said the use of AI meant fewer students were coming to office hours for remedial help. Malan, a computer science professor, has integrated a custom AI chatbot into a popular class he teaches on the fundamentals of computer programming. His hundreds of students can turn to it for help with their coding assignments.
Malan has had to tinker with the chatbot to hone its pedagogical approach, so that it offers only guidance and not the full answers. The majority of the 500 students surveyed in 2023, the first year it was offered, said they found it helpful.
Rather than spend time on 'more mundane questions about introductory material' during office hours, he and his teaching assistants prioritize interactions with students at weekly lunches and hackathons — 'more memorable moments and experiences,' Malan said.
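The article does not say how Malan's chatbot is implemented, but the behavior he describes, guidance without complete answers, is typically achieved by constraining a model with a system prompt. The sketch below assumes the OpenAI Python client; the model name and prompt wording are guesses for illustration, not the course's actual configuration.

```python
# A minimal sketch of a "hints only" teaching assistant. The model name and
# system prompt are illustrative assumptions, not any university's actual setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a teaching assistant for an introductory programming course. "
    "Guide the student toward a solution with questions and hints. "
    "Do not write the full solution or complete the assignment for them."
)

def ask_tutor(student_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("My loop prints the last element twice. What am I doing wrong?"))
```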
Katy Pearce, a communication professor at the University of Washington, developed a custom AI chatbot by training it on versions of old assignments that she had graded. It can now give students feedback on their writing that mimics her own at any time, day or night. It has been beneficial for students who are otherwise hesitant to ask for help, she said.
'Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by AI?' she said. 'Yeah, absolutely.'
What happens then to the pipeline of future professors who would come from the ranks of teaching assistants?
'It will absolutely be an issue,' Pearce said.
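Pearce's chatbot is not described beyond being trained on versions of assignments she had already graded. One plausible way to approximate that behavior is few-shot prompting, in which past submissions and her written feedback are supplied as examples before the new essay. The sketch below assumes that approach; the example assignments and feedback are invented, and the resulting message list would be sent to a chat model like the one in the previous sketch.

```python
# One plausible way to mimic an instructor's feedback style: few-shot prompting
# with previously graded work. The examples below are invented for illustration.
GRADED_EXAMPLES = [
    {
        "assignment": "Essay on media framing of protest movements ...",
        "feedback": "Strong thesis, but the second section needs a concrete case study.",
    },
    {
        "assignment": "Literature review on parasocial relationships ...",
        "feedback": "Well sourced; tighten the transitions in the final two paragraphs.",
    },
]

def build_feedback_messages(new_submission: str) -> list:
    """Assemble a chat prompt pairing past assignments with the instructor's feedback."""
    messages = [{
        "role": "system",
        "content": "Give feedback in the instructor's voice, matching the tone "
                   "and priorities shown in the example exchanges.",
    }]
    for example in GRADED_EXAMPLES:
        messages.append({"role": "user", "content": example["assignment"]})
        messages.append({"role": "assistant", "content": example["feedback"]})
    messages.append({"role": "user", "content": new_submission})
    return messages
```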
After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.
Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity and an AI presentation generator called Gamma to 'give them a fresh look.' At a glance, he said, the notes and presentations they had generated looked great.
'In hindsight, I wish I would have looked at it more closely,' he said.
He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.
The embarrassing situation made him realize, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for 'accuracy and appropriateness.' A Northeastern spokesperson said the school 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations.'
'I'm all about teaching,' Arrowood said. 'If my experience can be something people can learn from, then, OK, that's my happy spot.'
