College Students Want Their Money Back After Professor Caught Using ChatGPT

Newsweek, 16 May 2025

A student at Northeastern University has called for her tuition fees to be refunded after she discovered that one of her professors was using ChatGPT to respond to her work.
The professor had asked the chatbot to generate some "really nice feedback" on the student's work, even as many in the education sector call on students to stop using artificial intelligence for their assignments, according to a report from The New York Times.
Why It Matters
As artificial intelligence becomes increasingly prevalent in the education system, the double standard in AI use between faculty and students is being challenged. Students are usually the ones criticized for using generative AI on assignments, but this latest incident shows that professors are turning to the technology as well.
What To Know
In February, Ella Stapleton, a senior at Northeastern University's business school, noticed that her assignment notes from her professor appeared to include direct queries from a conversation with ChatGPT.
One prompt in the notes read, "expand on all areas. Be more detailed and specific," followed by descriptions and bullet points typical of AI-generated text, according to The New York Times.
Other class materials included distorted images, misspelled text, and other prompts, all of which are clear signs of AI usage.
However, Stapleton's business program explicitly ruled out the use of unauthorized AI and other "academically dishonest activities," leading her to file a formal complaint against the professor.
Image: Weber Arch and University Hall at Northwestern University in Evanston, Illinois, April 2016. Getty Images
This is not the first sign of growing pains as AI is introduced into the education system. A report from January this year found that almost 90 percent of academics believe the majority of their students use AI regularly, with generative AI tools being the most commonly used.
C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities, described the breakthroughs in large language models (LLMs), which include generative interfaces like ChatGPT, as an "inflection point" in U.S. education, warning: "The challenge now is turning today's disruption into tomorrow's innovation in teaching and learning."
What People Are Saying
Lee Rainie, director of Elon University's Imagining the Digital Future Center, said in a report on academic reactions to the use of AI: "The overall takeaway from these leaders is that they are working to make sense of the changes they confront and looking over the horizon at a new AI-infused world they think will be better for almost everyone in higher education.
"They clearly feel some urgency to effect change, and they hope the grand reward is revitalized institutions that serve their students and civilization well."
What Happens Next
Academic institutions are still working out how best to approach AI use by both students and staff, while the technology itself continues to develop rapidly.

Related Articles

I'm using Gemini now for my Gmail and there's one major discovery that's surprising
Tom's Guide

Not everything Google does or touches turns to gold. Case in point: The new Gemini AI addition to Gmail is not all it's cracked up to be. While it works amazingly well when it comes to composing emails and summarizing a thread, the great irony is that most search-related prompts are not even remotely helpful. (Note: I asked Google reps about my test results and they have not responded.)

I noticed the Gemini icon for the first time just about one week ago and started diving in right away. The AI bot is starting to roll out for many users with an update that includes new search functions, enhanced smart replies, and a few inbox cleanup prompts. Before I cover what didn't work for me, let me just say: I can see where this is all heading and I'm mostly pleased with the basic functions, like smart replies and summaries.

I'm used to AI providing some basic help with my email replies, since I've used ChatGPT many times to help me compose and revise emails. Gemini does an exemplary job. When you want help, you can open a sidebar and enter prompts. On my phone or in the browser, I could also ask Gemini to 'polish' my own email, adding more details and context in seconds.

I also really liked the summaries. At the top of the screen, there's a button called 'Summarize this email' and the little star icon for Gemini. You'll see a summary with action steps, and in all of my testing, Gemini was accurate and helpful. I found I didn't have to read back on a thread as much and used Gemini to catch me up on the conversation.

I wasn't here for the smart replies and summaries, though. I've been able to do that with other AI bots for the last three years. I want an AI that goes much, much further than that with my email — e.g., tools for helping me understand not just one email thread. I have around 650,000 emails in my Gmail and it's a treasure trove that Gemini could easily explore. I wanted to be able to find out who emailed me the most in one particular month, which topics I discussed most often this year, and create a mass email to let the people I interact with the most know that I will be out a couple of days in June.

Unfortunately, Gemini seems woefully inadequate and returns incorrect results. When I asked the bot to find the people I emailed the most this year and also in May, the results were not correct. Gemini only listed two people and I had barely interacted with them. It's possible Gemini just found the most recent interactions, but I had asked for results from 2025 and all of May.

When I asked Gemini about the topics I had discussed most often, the AI was blissfully unaware of which emails were just spam sent to me. My prompt was 'Which topics did I discuss and reply to the most in 2025' and Gemini listed a bunch of email newsletters. That was an error, because Gemini was only looking at the emails sent to me the most, not those where I interacted.

I also asked Gemini to compose an email to the people I interact with the most, explaining that I will be out June 5-6. Once again, Gemini only found the people that emailed me the most. While the email the bot composed was helpful, what I wanted was for the bot to do the heavy lifting — compose an email with each person in a blind copy. I just wanted to click send.

Gemini is also supposed to help with inbox cleanup duties, but this was mostly a miss. I asked Gemini for Gmail on my iPhone to look for old emails with large attachments and the bot showed me every email with an attachment, not the ones with the biggest attachments. And they were not old emails; they were all from the current month. I also asked Gemini to show me the emails with the largest attachments. For some reason, that prompt didn't work. 'I can't help with that' was the response. This prompt did work, though: 'Show me all emails with an attachment from May 2024.' I was able to then delete all of those messages quickly, which was helpful.

The problem is that Gemini seemed to work about 25% of the time when I was trying to clean up my inbox. It is hit or miss. I really wanted the bot to understand my goals. Inbox cleanup is fine, although anyone who has used Gmail for a while knows we've been able to tame our inboxes using searches for many years. For example, I can type 'larger:5M after:2024/05/24 before:2025/05/25' to find emails with attachments over 5MB from the last year. There's also a filter to help guide you through that process. Instead, I wanted Gemini to be more like a smart assistant.

More than anything, Gemini seemed to only search recent emails. In one query, I asked which emails seem urgent, and the bot only mentioned two from the last week. I asked which emails had a shipping label attached and the bot only found four, even though there are several dozen from the last two months.

Gemini in Gmail is in more of a testing phase. Google is adding new features and enhancing the AI as time goes on, likely based on feedback or data they collect. For now, the AI is not really worth it for me, since the results are so unpredictable or outright incorrect. I expect the technology will improve, but I'll probably be leery of diving in again until it becomes obvious that Gemini will work as expected. I want the bot to make me more productive and to work reliably every time I type in a prompt. We're obviously not there yet.
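For readers who want to try the manual cleanup route the author mentions, a few more of Gmail's documented search operators can be combined in the same way. The queries below are illustrative examples only (the sender address is a placeholder), not queries taken from the article:

has:attachment larger:10M older_than:1y (messages over 10 MB with attachments, older than one year)
filename:pdf larger:5M (messages carrying PDF attachments larger than 5 MB)
from:example@domain.com has:attachment before:2024/01/01 (attachments from one sender, received before a given date)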

WWDC 2025 is make or break for Apple Intelligence — here's why
Tom's Guide

WWDC 2025 is going to be a big deal for Apple users. Not only are we expecting to see a big redesign for iOS 19 (or iOS 26), but it also marks one year since Apple went all in on AI and announced Apple Intelligence.

Of course, Apple Intelligence hasn't really been the resounding success that Apple probably hoped. It's not been a disaster, but WWDC 2024 turned out to be the one thing Apple typically tries to avoid doing — overpromising and underdelivering. Nearly a year later, many of the promised Siri features are still missing in action. Considering Apple was already late to the party with AI, and the troubles it's had, the pressure is on at WWDC 2025. It's make or break, and if Apple doesn't ease the biggest concerns about Apple Intelligence then it risks the feature ending up the way Siri did 10 years ago.

The biggest issue with Apple Intelligence is that Apple realized AI was going to be a big deal much later than everyone else. Apple wasn't ignoring AI, but in the years before ChatGPT exploded in popularity, the company wasn't that interested in investing large amounts of money into AI development — especially with no clear end goal. According to a report from Bloomberg, it wasn't until after ChatGPT arrived that Apple's software chief Craig Federighi used generative AI for himself and realized how useful a tool it could be. But by that point Apple was seriously far behind its rivals, and wouldn't be able to catch up easily.

This is apparently where the main problems with Siri come in, since Apple attempted to catch up by tacking the new LLM-powered Siri onto the older voice assistant. This hasn't worked out, not only because of the delays but also because it apparently caused a bunch of problems that have been described as "whack-a-mole." All that inevitably made the controversial rollout of Apple Intelligence even more problematic. Not because the features that were released were bad, though things like news summaries proved too problematic to keep around.

Apple Intelligence itself didn't land until iOS 18.1 arrived in late October, a month after iOS 18 and the iPhone 16 were released. iOS 18.2 was where the real improvements came into play, and that didn't arrive until late December. iOS 18.3 and 18.4 landed throughout the first few months of 2025, but by that point the number of useful new features had dropped dramatically. The problem wasn't so much the state of Apple Intelligence, though, as how Apple handled it. Simply put, it looked like Apple didn't want to be seen lagging behind its rivals, then overestimated what it could accomplish.

WWDC is where Apple tells us what's going on with all its software, and it would be a mistake not to give Apple Intelligence the attention it needs. This is the first anniversary of its reveal, and despite all the problems Apple can't afford to be seen ignoring it. I'm not saying that WWDC needs to be an all-Apple Intelligence show. Google I/O did that, and it was far too much AI for any normal person to handle. But that doesn't mean Apple can brush AI aside and treat it like Siri was treated for so many years.

If that happens, Apple might as well be throwing in the towel on the AI race. We all know that the company is behind the likes of Google and OpenAI, but that doesn't mean its AI ambitions are dead. There's plenty of time to improve, and potentially catch up. In a best-case scenario, Apple would admit that it dropped the ball with Apple Intelligence and pledge to do better going forward. I don't see that happening. Apple is not known for willingly admitting its mistakes.

But I also don't see Apple spending a great deal of time on AI either. Not just because it has a bunch of major design revamps to get us through in a keynote that can only be so long, but also because I'm sure Apple doesn't want to risk making the same mistakes as last year. No doubt we'll be hearing a lot of impressive stats about Apple Intelligence and its adoption, and maybe some reveals on smaller features that may be on the way. And that should be enough.

AI isn't the focus of this year's releases based on what we've heard, and it shouldn't dominate the show. But it does still need attention and improvements so it can continue to grow. Apple has already made plenty of mistakes with AI, from jumping on the bandwagon late to screwing up the launch of the features when they were ready. So it's imperative that the company get itself into gear and come up with an adequate strategy for future updates and AI features.

WWDC is going to be the starting point for all of that, and the attention Apple Intelligence gets at the show is going to lay the groundwork for the next few years of Apple AI rollouts. And while we can't expect Apple to roll out another wave of announcements like the ones we saw last year, it needs to avoid ignoring the topic completely. Otherwise, if AI is just going to get tossed to the side because of some early hurdles, then Apple probably shouldn't have bothered investing in it in the first place.

ChatGPT is making us weird
Yahoo

Artificial intelligence chatbots like OpenAI's ChatGPT models are changing the way we think. Beyond our workflows and creative processes, the bots are disrupting our social cues and intimate lives. Business Insider spoke to a range of professionals to ask: Is ChatGPT making us weird?

The other day, my family group chat lit up when I posed a question about whether it's important to say "please" and "thank you" to ChatGPT when asking it to conduct a niche search or plan out an itinerary. My mother, ever a stickler for manners, said she makes a conscious choice to behave in this way. A choice she said she makes to "keep myself human."

Another loved one later admitted she's been leaning on the chatbot for guidance as she navigates a tricky moment in her marriage. And I couldn't resist my temptation to ask ChatGPT to evaluate how attractive I am after The Washington Post reported that people were asking it for beauty advice. (It said I have "strong, expressive features," then told me to stand up straighter and smile more.) But I know it's not just my immediate circle: ChatGPT is making everyone behave a little strangely.

As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren't only mimicking human interaction but quietly altering the expectations and norms that govern it.

Business Insider spoke with four professionals who interact with chatbots like OpenAI's GPT models in radically different ways — a sociologist, a psychologist, a digital etiquette coach, and a sex therapist — to explore how the rise of AI is changing how we see each other and how we view ourselves, as well as how it's disrupting our manners and intimate lives. The conversations centered on ChatGPT, since OpenAI's chatbot is quickly becoming the AI world's equivalent of what Google is to search engines, but the professionals said similar conclusions could be drawn for Meta AI, Microsoft Copilot, Anthropic's Claude, or any other large language model on the market today.

Digital etiquette consultant and author Elaine Swann said that society has needed to adapt to new social cues as each wave of technology has changed our lives. While we've largely collectively agreed that it's all right to use shorthand in personal email correspondence and rude to take a cellphone call on speakerphone in public, we're still establishing a social code for how to interact with AI bots and agents.

Kelsey Vlamis, a senior reporter at Business Insider, said she's started seeing a chatbot-related change in her personal life. While on vacation in Italy, she said her husband found himself impatient with their tour guide, consciously having to keep himself from interrupting with questions "since that's how he talks to ChatGPT when he is trying to learn something." Of course, he had to hold himself back, Vlamis added, "since that is not, in fact, how we talk to human beings."

Since AI has gained momentum, social media is full of posts asking whether it's appropriate for a spouse to use ChatGPT to write a love note to their partner, or for a worker to rely on an AI agent to fill out a job application on their behalf. The jury's still out on situations like these.

"AI is certainly smarter now, which is great for us, but at the same time, we have to be very careful that it doesn't substitute basically our judgment or empathy," Swann said. "We have to be careful with it, not just utilizing it as our sole source of information, but also making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."

Maintaining our baseline levels of respect — not just for each other, but the world around us — is also key, Swann said. After OpenAI CEO Sam Altman posted on X in late April that it costs "tens of millions of dollars" for the company to process niceties like "please" and "thank you" directed toward ChatGPT, she stressed that it's up to the company to make processing those statements more cost-effective, not up to users to stop being polite.

"This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us." Altman, for his part, said the massive amount of funds used on polite requests toward ChatGPT is money "well spent."

Laura Nelson, an associate professor of sociology at the University of British Columbia, said that because the world's most popular chatbots are created by American companies, written by US-based programmers, and trained primarily on content written in the English language, they have deeply entrenched biases that are often seen in Western cultures. "It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.

So if you ask ChatGPT to draw you a picture of a plate of breakfast, it'll conjure typical North American foods: bacon, eggs, sausage, and toast. It describes a bottle of wine as a "classic and thoughtful gift," though in many cultures, alcohol is rarely consumed, and a bottle would make a tone-deaf present.

While those examples are relatively harmless, the bots also exacerbate more insidious and potentially damaging biases. A 2021 study published in Psychology & Marketing found that people prefer AI to be anthropomorphized as female in their devices, like it is in most pop culture representations, because it makes the technology seem more human. However, the study found that preference may be inadvertently entrenching the objectification of women. There have also been numerous reports that lonely, mostly male, users may verbally abuse or degrade their AI companions. Business Insider previously reported that artificial intelligence is also rife with discriminatory bias due to the data it's trained on, and ChatGPT in particular showed racial bias when screening résumés for jobs, over-selecting Asian women candidates and under-selecting Black men.

While these biases may not immediately change our behavior, they can impact our thinking and the ways we operate as a society, Nelson said. And if ChatGPT or other AI applications are implemented into our decision-making, whether in our personal lives, in the workplace, or at the legal level, it'll have wide-reaching effects we haven't even considered yet.

"There's just no question that AI is going to reflect our biases — our collective biases — back to it," Nelson said. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term. It's a tricky thing to get a handle on."

Concrete data about the societal shift caused by AI is hard to come by, but the companies behind the tech know something is happening. Many of them have dedicated teams to figure out what effect their technology has on users, but their publicly available findings aren't peer-reviewed like a typical scientific study would be.

OpenAI announced that a recent update to the GPT-4o model had a hiccup. It was "noticeably more sycophantic" than prior models, the company said in a press release. While it passed OpenAI's self-described "vibe check" and safety testing, the company rolled it back after realizing its programming to please the user could fuel anger, urge impulsive actions, or reinforce negative emotions "in ways that were not intended."

The company's announcement highlighted that OpenAI is keenly aware that the various AI applications gaining momentum online — from digital romantic partners to study buddies to gift-suggesting elves — have also started to have creeping effects on human emotions and behavior. When reached for comment, a spokesperson for OpenAI directed Business Insider to the company's recent statements on sycophancy in GPT-4o and an early study of emotional well-being. OpenAI's research, conducted with users over the age of 18, found that emotional engagement with the chatbot is rare. However, heavy users were more likely to report an emotional connection to the bot, and those who had personal conversations with ChatGPT were more likely to report feelings of loneliness.

An Anthropic spokesperson said the company has a dedicated research team, Societal Impacts, which is analyzing Claude usage, how AI is being used across jobs, and studying what values AI models have. Representatives for Meta and Microsoft did not respond to requests for comment.

Nick Jacobson, an associate professor of psychiatry at Dartmouth's Center for Technology and Behavioral Health, conducted the first trial study delivering psychotherapy to clinical populations using generative AI. His research found that a carefully programmed chatbot can be a helpful therapeutic tool for people suffering from depression, anxiety, and eating disorders. Engagement among patients in the study rivaled that of in-person therapy, they saw a significant reduction in the severity of their symptoms, and, when measured using the same test as human providers, the patients in the study reported they bonded with their therapeutic chatbot with a similar intensity as a human therapist.

"Folks were really developing this strong, working bond with their bot," Jacobson said, a factor which is key to a productive therapeutic relationship. However, most bots aren't programmed with the care and precision that Jacobson's was, so those emotional bonds could be developed with an AI that doesn't have the skills to handle their users' emotional needs in a productive way.

"Nearly every foundational model will act in ways that are profoundly unsafe to mental health, in various ways, shapes, and forms, at rates that are totally unacceptable," Jacobson said. "But there are so many people that are using them for things like therapy and just plain companionship that it's becoming a real problem — I think folks should handle this with greater care than I think they are."

Emma J. Smith, a relationship and sex therapist, said she believes in-person therapy comes with unique benefits that can't be replicated by AI, but she sometimes recommends using chatbots for anxious clients to practice social interactions in a low-stakes environment, "so if it goes badly, or you get stuck, there's no pressure."

"But some of the drawbacks are, like anything really, if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world," Smith said. "Video games are probably fine for a lot of people, and then there are some people that it takes over, and then they're missing out on their non-virtual life because they're too involved. I can see that that would be a problem with these bots, but because this is so new, we know what we don't know."

While the results of his trial were promising, Jacobson warned that the large language model used in his study was carefully trained for years by some of the most prominent scholars in the psychiatric field, unlike most "therapy" bots available online.

"This has inherently got a lot more danger than a lot of folks are necessarily aware of," Jacobson said. "There's probably a great deal of good that can happen from this, but there's a great deal we don't know, like for example, when folks are turning to these things for companionship, does that actually enhance their ability to practice in social settings and build human bonds, or do folks actually further withdraw and replace what would be otherwise human relationships with these parasocial relationships with these chatbots?"

Jacobson is particularly concerned about AI's impact on developmental processes among younger people who haven't grown up with old-school social norms and habits. While testifying before the Senate Commerce Committee in early May about child safety in the AI era, Altman said he would not want his son to have a best friend bond with an AI bot, adding that children require "a much higher level of protection" than adults using AI tools.

"We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson said. "And in my mind, that's acting quite irresponsibly. You know, a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."

Read the original article on Business Insider
