
Northeastern University student demands tuition refund after discovering professor used ChatGPT
A student at Northeastern University demanded a tuition refund after discovering that her professor had used AI tools like ChatGPT to create class materials, despite a course policy banning unauthorised use of such technology.
Ella Stapleton, a senior at the time, was reviewing lecture notes for her organisational behaviour class when she noticed an instruction addressed to ChatGPT in the document. According to The New York Times, the material included phrases like 'expand on all areas' and showed common signs of AI-generated content, such as awkward phrasing, distorted images, and even typos that mirrored machine output.
'He's telling us not to use it, and then he's using it himself,' Stapleton told The Times.
She lodged a formal complaint with the university's business school, citing her professor's undisclosed use of AI and other concerns with his teaching. She requested a refund of over $8,000, the cost of the course.
The professor, Rick Arrowood, later acknowledged that he had used ChatGPT, Perplexity AI, and the AI presentation tool Gamma to refresh his materials. 'In hindsight… I wish I would have looked at it more closely,' Arrowood said.
He also admitted that the AI-generated content was flawed and noted he hadn't used those materials in class discussions, which were held in person.
After a series of meetings, Northeastern rejected Stapleton's refund request.
A spokesperson for the university was quoted by Fortune as saying that Northeastern 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations,' adding that it enforces policies requiring attribution and accuracy checks when using AI-generated content.
This incident has become part of a wider debate in higher education, where students are increasingly criticising professors for using AI tools. While many universities restrict students from using ChatGPT and similar tools in coursework, professors are now facing scrutiny for doing the same. Some students argue that they are paying to be taught by humans, not algorithms they could access for free.
Stapleton's complaint is not isolated.
A student at Southern New Hampshire University, for instance, found that their professor had used ChatGPT to grade essays and generate feedback; feeling 'wronged', the student later transferred schools, The Times reported.
Paul Shovlin, an English professor at Ohio University, acknowledged students' frustrations but said using AI to draft slides or notes was comparable to relying on published teaching aids. Still, he emphasised the need for transparency: 'It's the human connections that we forge with students... that add value,' he said.
Related Articles


NDTV
Watch: Swiss Researchers Develop AI-Powered Robot That Plays Badminton Against Humans
Researchers at ETH Zurich have developed an AI-powered legged robot that plays badminton against humans with impressive agility. The team tested its AI controller on ANYmal-D, a four-legged robot equipped with a stereo camera and a dynamic arm holding a badminton racket, The Independent reported. Using reinforcement learning, the robot tracks the shuttlecock's flight, predicts its trajectory, and navigates the court to intercept and return shots, learning by trial and error to make better decisions. The main adaptation was giving it four legs instead of the two used by humans, a configuration that gives the robot far more stability and flexibility of movement. (A video of the robot playing against human players was shared by Science Magazine on June 3, 2025.)

According to researcher Yuntao Ma, the project highlights AI's potential for enabling legged robots to perform complex tasks, potentially leading to advances in autonomous and intelligent systems, including humanoids. 'We introduced a perception noise model that maps the robot's motion to perception quality, and this allowed the reinforcement learning algorithm to automatically balance between the robot's agile motion and reliable perception,' said Ma. 'The control algorithm also generalises to other robot platforms, such as humanoids, and to other tasks, such as search and rescue and home services,' he added.

What happened in the test game?

Tested against human players, the robot successfully returned shots at various speeds and angles, achieving rallies of up to 10 hits. It tracked shuttlecocks travelling at up to 12.06 m/s, adjusting its gait and balance as it moved, and could even rise on its hind legs for better visibility while prioritising balance. However, it struggled with fast, aggressive shots such as smashes, owing to hardware limitations in camera perception and actuator speed that produced a 0.375-second delay in response; improvements in perception responsiveness will be needed for competitive full-court play.

'A key advantage of our approach is that the controller is trained end-to-end: upper and lower limbs are optimised together from the beginning. There is no architectural distinction between coordinating the arm with the legs and coordinating the left and right legs. As a result, the limbs learn to compensate for each other's dynamics naturally during training, leading to coordinated whole-body motion,' Ma told Interesting Engineering.
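
The report gives only the outline of the controller, but two of the ideas Ma describes, motion-dependent perception noise and shuttlecock trajectory prediction, can be sketched in a few lines of Python. The sketch below is purely illustrative: the function names, noise constants and drag model are assumptions of mine, not the ETH Zurich implementation.

import numpy as np

# Illustrative sketch only: all names and constants here are assumptions,
# not the ETH Zurich implementation.

GRAVITY = 9.81   # m/s^2
DRAG = 0.6       # crude per-second velocity decay for a shuttlecock (assumed)

def perception_noise_std(base_speed, camera_rate, sigma_min=0.01,
                         k_lin=0.02, k_ang=0.05):
    """Map robot motion to perception quality: faster base motion and faster
    camera rotation mean a noisier shuttlecock position estimate."""
    return sigma_min + k_lin * base_speed + k_ang * camera_rate

def noisy_observation(true_pos, base_speed, camera_rate, rng):
    """Corrupt the true shuttle position with motion-dependent Gaussian noise,
    standing in for the quoted perception noise model."""
    sigma = perception_noise_std(base_speed, camera_rate)
    return true_pos + rng.normal(0.0, sigma, size=3)

def predict_interception(pos, vel, racket_height=1.2, dt=0.01, horizon=2.0):
    """Roll a drag-damped ballistic model forward and return the first point
    where the shuttle descends through racket height, or None."""
    p, v = pos.astype(float).copy(), vel.astype(float).copy()
    for _ in range(int(horizon / dt)):
        v[2] -= GRAVITY * dt      # gravity pulls the shuttle down
        v *= 1.0 - DRAG * dt      # crude aerodynamic drag
        p += v * dt
        if v[2] < 0.0 and p[2] <= racket_height:
            return p
    return None

rng = np.random.default_rng(0)
estimate = noisy_observation(np.array([3.0, 0.5, 2.5]),
                             base_speed=1.5, camera_rate=0.8, rng=rng)
print(predict_interception(estimate, np.array([-6.0, -0.4, 2.0])))

Even in this toy, the trade-off Ma describes is visible: the faster the simulated robot moves, the noisier its shuttle estimate becomes, so a learning algorithm trained against such a model is implicitly rewarded for balancing raw agility against reliable perception.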


Time of India
Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?
Artificial Intelligence began as a quest to simulate the human brain. Is it now in the process of transforming the human brain's role in daily life? The Industrial Revolution diminished the need for manual labour. As someone who researches the application of AI in international business, I can't help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.

Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time, and teachers use similar tools to provide feedback.

The economic and cultural implications are profound. What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?

Echoes of the industrial revolution

We've been here before. The Industrial Revolution replaced artisanal craftsmanship with mechanised production, enabling goods to be replicated and manufactured on a mass scale. Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.

Today, there's a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality. The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and "good enough," there's the risk of losing the depth, nuance and intellectual richness that define exceptional human work.

The rise of algorithmic mediocrity

Despite the name, AI doesn't actually think. Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they have processed. They are, in essence, mirrors that reflect collective human creative output back to users - rearranged and recombined, but fundamentally derivative. And this, in many ways, is precisely why they work so well.
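
That "statistical prediction" claim can be made concrete with a toy sketch of my own (an illustration I am adding, not anything from the essay): a bigram model that predicts each next word purely from counts of what followed it in its training text, and whose output is therefore derivative by construction.

from collections import Counter, defaultdict
import random

# Toy bigram "language model": it predicts the next word purely from counts
# of what followed that word before. Real systems are vastly larger, but the
# principle of statistical next-word prediction is the same.
corpus = ("please find attached the report please find attached the slides "
          "please review the report").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1               # how often `nxt` followed `prev`

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*counts[prev].items())
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)
word, out = "please", ["please"]
for _ in range(5):
    word = next_word(word, rng)
    out.append(word)
print(" ".join(out))    # fluent, formulaic, and entirely recombined

Everything the toy emits was already in its corpus, merely reshuffled; the essay's point is that the same holds, at enormous scale, for the systems it discusses.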
Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has been there before, in one form or another. Generative AI excels at producing competent-sounding content - lists, summaries, press releases, advertisements - that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when "good enough" is, well, good enough.

When AI sparks - and stifles - creativity

Yet, even in a world of formulaic content, AI can be surprisingly helpful. In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance.

However, further analysis revealed a critical trade-off: reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.

I wasn't surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and world views of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.

More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions. One set of experiments tasked participants with making medical diagnoses with the help of AI; the researchers designed the experiment so that the AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt its biases and make errors in their own decisions. What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality - not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.

Navigating the cognitive revolution

True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future; it can only remix the past. What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness.

The challenge, then, isn't just technological. It's cultural. How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content? The historical parallel with industrialisation offers both caution and hope. Mechanisation displaced many workers but also gave rise to new forms of labour, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs. This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction.
The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention. Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence? The answer, for now, is up in the air.


Time of India
Yuval Noah Harari on risks of intimate relationships with AI: 'There could be unknown benefits, but for now…'
Israeli historian and author Yuval Noah Harari has warned that artificial intelligence's ability to replicate intimacy could alter human relationships. The author, known for his bestselling books Sapiens and Nexus, said that 'there could be unknown benefits, but for now, the risks far outweigh them.'

Harari suggested that while fears about job automation dominate headlines, AI's impact on intimate relationships presents a more unsettling prospect. In a recent Instagram post, he shared a video clip of a discussion co-hosted by the Deutsches Institut für Japanstudien, Tokyo College, and Kawade Shobo. During the interview, Harari explained that AI, having already mastered language and attention, is now progressing towards mimicking intimacy, which he considers a potent form of human connection.

What Yuval Noah Harari said about intimate relationships with AI

Captioning the Instagram post, Harari wrote: 'Wouldn't you want a partner that always gives you 100% attention and is never upset? We are facing a world where a new generation grows up with intimate relationships with AI. There could be unknown benefits - but for now, the risks far outweigh them.'

In the video, he says: 'AI can replicate intimacy the same way that it masters language and the same way that it previously mastered attention. The next frontier is intimacy, and it is much more powerful than attention. If you want to change somebody's views on politics or to sell a product, intimacy is the most powerful tool to do that. A good friend can change your views in a way that no number of articles in a newspaper or a book can do. Until today, nothing could fake intimacy, and it was impossible in particular to mass-produce intimacy.'

'Now, what happens if the new generation grows up and develops intimate relations with AIs instead of with other human beings? Again, we don't know. It could have some benefits, but the potential danger is enormous: that people will become attached to the fake people, and in the process, we lose the ability to create intimacy with real human beings. Because real human beings are much more problematic than AIs that want to become your intimate friend,' Harari continued.

'Its greatest advantage is that it has no feelings of its own. It's never upset. It's never angry. It's never tired. It can focus on you 100%, understand exactly how you feel and create a fake sense of intimacy,' noted the author, who is known for exploring human evolution, technology, consciousness, and AI's impact on our future.