Latest news with #academicIntegrity


CBC
4 hours ago
- CBC
Canadian universities grapple with evaluating students amid AI cheating fears
Canada's post-secondary institutions are looking for new ways to assess students as they respond to fears about AI being used to cheat on exams.

During the COVID-19 pandemic, most university exams were moved online. Then came generative AI tools like ChatGPT, capable of producing essays and answering complex questions in seconds. In the U.S., reports of rampant AI cheating led to an explosion in sales of "blue books" used for old-fashioned pen-and-paper exams this school year. In Canada, some professors are making a similar move amid widespread reports of AI cheating, while others are testing out oral exams or finding ways to incorporate AI.

Six in 10 Canadian students said they use generative AI for their schoolwork, according to an October 2024 study from KPMG in Canada.

"We are definitely in a moment of transition with a lot of our assessments," said Karsten Mundel, co-chair of the University of Alberta's AI Steering Committee.

Don't boil AI down to cheating tool: prof

Mundel speaks with his students about his expectations around AI. If they use it for brainstorming, he asks them to explain their process and the prompts they used so he can see how they led to the final product.

He takes an optimistic view of this new challenge, saying AI has reinvigorated conversations about what academic integrity means in the current day.

"I get worried when AI in any educational context gets boiled down to this tool of cheating," he said. "I think it's an exciting time right now because of the transformations that it will bring, and to really help us get at the core of what skills we're trying to teach."

At his school, Mundel says, there's an increase in handwritten exams, as well as new approaches that incorporate oral exams and assignments that use AI and then have students reflect on their AI use.

He says going back to pen and paper isn't necessarily the best solution, and acknowledges some students have complained about the change. "We don't have the skills anymore — universally, at least — to hand-write long-form things. And so that's a learning curve for our students, and for the instructors who have to read."

Many post-secondary students today have grown up working primarily on electronic devices and arrive at university with little experience writing by hand. In Ontario, for example, learning cursive in elementary school was made optional in 2006, though the provincial government made it mandatory again in recent years.

Katie Tamsett, vice-president, academic, of the U of A's student union, says concerns about cheating with AI have to be balanced with the fact that the technology is being used in the real world.

"As students, we're seeing that in the workforce, AI is being used. And so when we're doing our courses in university, we want to be seeing that AI is being incorporated as a tool."

Tamsett says the student union is in ongoing conversations with the university about how to develop best practices around AI.

Student says schools can be 'overly reactionary'

University of Toronto Students' Union president Melani Vevecka says her experience with pen-and-paper exams has been largely positive, but says they can be a barrier for students with anxiety or learning disabilities.

"Part of the challenge to accommodate everybody is figuring out what kind of assessments will hold value in a world where students can probably generate a decent essay within a few minutes," she said.

Vevecka understands the pitfalls of relying on AI, and says she knows some students have used it to cheat. But she also says it's been helpful in her studies, for example in generating practice questions ahead of a final exam. She feels universities' responses to it have in some cases been "overly reactionary."

What Vevecka would like to see is more of a focus on clarity and education around AI, "rather than vague restrictions or punitive suspicion, which is kind of something that most academics are trying to do."

"I think that universities should be creating academic cultures where students are empowered to think critically about the tools that they use, and where trust is preserved through transparency and not just surveillance."

In-person exams 'fear-based,' says BCIT administrator

Jennifer Figner, provost and vice-president, academic, at the British Columbia Institute of Technology, says the move to in-person exams is a trend, but one she views as being "fear-based" — and a route her school is encouraging professors not to take.

"What really we should be doing is challenging ourselves to figure out, how do you incorporate AI into testing or into assessment, rather than trying to work around it by going back to pencils and paper and stuff that we did in 1970?" she said.

On the other hand, Figner says, the pandemic coinciding with generative AI created an environment where cheating became so easy that not doing it could put students at a disadvantage. Software that detects AI cheating is imperfect, so she also worries about students being wrongfully penalized. And oral exams can be "far more labour-intensive and time-consuming" than having all students take an exam at once.

Figner says AI is ultimately going to force the entire education sector to "totally revamp" the way students are assessed and evaluated.

Existential questions for universities

Christina Hendricks, academic director at the University of British Columbia's Centre for Teaching, Learning and Technology, uses handwritten exams for finals in her philosophy classes. But some UBC professors are sticking to computers, doing in-class exams with supervision to deter cheating. Some are done in a lab where the only thing students can access is the exam, and the rest of the computer is locked.

In some disciplines, she's heard of instructors assigning infographics, slides or videos to get around AI — but now all those things are also easily done with AI tools.

Her centre helps instructors take small steps to change their assessment setups over time. In the long term, Hendricks agrees that universities will have to completely overhaul their assessment strategies.

"I think that there's going to be these reflective, existential questions for some faculty," she said.


South China Morning Post
a day ago
- Science
- South China Morning Post
AI content detector: why does China dismiss it as 'superstition tech'?
With the graduation season approaching, many Chinese universities have introduced regulations setting clear requirements for the proportion of artificial intelligence-generated content – or the 'AI rate', as it is called – in theses. Some universities have used the AI rate as a deciding factor in whether a thesis is approved.

The rule is intended to prevent academic misconduct, as educators have become increasingly concerned about the unregulated use of AI in producing scholarly literature, including data falsification and content fabrication, since the public debut of generative AI models such as ChatGPT.

However, an official publication of the Ministry of Science and Technology has warned that using AI content detectors to identify AI writing is essentially a form of 'technological superstition' that could cause many unintended side effects.

AI detection tools could produce false results, the Science and Technology Daily said in an editorial last Tuesday, adding that some graduates had complained that content clearly written by them was labelled as AI-generated. Even a very famous Chinese essay written 100 years ago was evaluated as more than 60 per cent AI-generated when analysed by these tools, the article said.


Forbes
31-05-2025
- Health
- Forbes
Understanding How Students Use AI and What Faculty Can Do About It
Nearly every day, I see an op-ed or social media post about students' use of AI, most written by faculty. The use of AI in the classroom is controversial among faculty, with some embracing it and finding ways to incorporate it into classroom assignments, others expressing anger about students using it to write papers, and still others being uncertain about what to do.

A new survey of 1,000 students by Kahoot! – Study Habits Snapshot – shows some interesting patterns. To better understand the implications of the survey results, I talked with Liz Crawford, Director of Education at Kahoot! I was curious about her interpretation of the finding that 70% of students already use AI in their academic work, especially regarding what that means for faculty, teaching, and assessment of learning.

Crawford explained, "We're entering a new era where AI isn't just a tool—it's becoming a learning partner. Today's students use AI to work more efficiently, personalize their learning, and deepen their understanding. From summarizing notes in seconds using a phone camera to generating self-quizzes before an exam, students are proactively using AI to support—not shortcut—their academic growth."

She advised faculty: "It's critical to move beyond the assumption that AI use is synonymous with cheating." Crawford believes academic integrity is vital, and that "many students use AI responsibly to enhance their learning, spark new ideas, and strengthen their critical thinking." She believes that faculty need to realize that "AI is no longer a future trend—it's already embedded in how students learn." From her perspective, this growing reliance on AI isn't something to fear, but instead a call to action.

Crawford shared, "If we don't evolve our teaching and assessment strategies, we risk creating a disconnect between how students are learning and how we're guiding them. Thoughtful integration of AI allows educators to model digital responsibility, engage students more meaningfully, and ensure that learning environments remain relevant and future-ready."

To further explore how these changes might play out in the classroom, I asked Crawford about a particularly concerning part of the Kahoot! survey – students appreciated AI's instant feedback over that of peer study groups. I asked her how this finding might influence faculty design of formative assessments and student support systems. She noted that the demand for immediate AI feedback shows a shift in student expectations and needs, and presents an opportunity for faculty. More specifically, she stated, "To begin with, integrating AI-powered tools into assessment strategies can be a game-changer for faculty." She emphasized that tools like those provided by Kahoot! and similar organizations can provide real-time feedback, potentially empowering students to identify and correct their misunderstandings promptly. Crawford and others conducting research in the area believe "this type of approach improves comprehension but also keeps students engaged and motivated throughout the learning process."

Another key benefit of AI integration, according to Crawford, is the potential for personalization. She stated, "By analyzing performance data, AI systems can offer tailored feedback that addresses each student's unique challenges and needs. This attention can lead to better learning outcomes and heightened student enthusiasm for their studies."

However, Crawford cautioned, "While AI feedback is incredibly useful, it's essential to remember that it should complement, not substitute, human connections."

The survey also revealed a troubling trend that faculty cannot ignore: 40% of students surveyed reported skipping exams due to fear of failure. I asked Crawford if there was anything AI could do to ease this fear and improve confidence among students. She shared, "Academic anxiety often stems from uncertainty as students aren't sure how to prepare, whether they're studying the right material, or fear of failure." Crawford noted how AI can help, stating: "This is where responsible AI integration can make a real difference. AI offers a consistent, on-demand support system that students can rely on throughout their learning journey."

Knowing this, Kahoot! is beginning to combine AI with gamification – adding game-like elements to AI interactions. Crawford shared that students can use AI to scan notes and turn them into personalized quizzes using their phones, and they can do this anywhere. She noted, "Whether they're commuting, studying between classes, or reviewing before bed, students can actively engage in low-pressure practice that builds mastery over time."

Of course, with so much innovation, it's easy to understand why many faculty feel overwhelmed, even if they want to incorporate AI-based learning in their courses. I asked Crawford how faculty can take the first steps. She explained, "Start small, stay curious, and utilize trusted tools. You don't need to become an AI expert overnight." She added, "I recommend that faculty members leverage AI to tackle tasks that help them be more efficient, such as preparing for their classes, designing formative assessments, and analyzing reports by exploring the capabilities of different platforms."

One of the most important pieces of advice Crawford shared for faculty is, "It's important to recognize that your students can be partners in this journey. Invite their input, explore AI together, and use these conversations to teach digital responsibility." She wants to remind faculty that their role as "a guide, mentor, and critical thinker is more essential than ever in an AI-driven world."

From my vantage point as a faculty member, I don't think we can afford to ignore how quickly AI is shaping the way students learn. Rather than shutting the door on AI out of frustration, we have an exciting opportunity to design learning environments and assignments that are creative, rigorous, and engage with AI in positive ways. As Crawford reminds us, we need to work with students to be digitally responsible and critical consumers of AI-generated information.


New York Times
27-05-2025
- Politics
- New York Times
Harvard Professor Who Studied Honesty Loses Tenure Amid Accusations of Falsifying Data
A Harvard professor who has written extensively about honesty was stripped of her tenure this month, a university spokesman said on Tuesday, after allegations that she had falsified data.

The scholar, Francesca Gino, a professor of business administration at Harvard Business School and a prominent behavioral scientist, has studied how small changes can influence behavior and has been published in a number of peer-reviewed journals. Among the studies on which Dr. Gino was a co-author is, for example, one showing that counting to 10 before deciding what to eat can lead to choosing healthier food.

In 2021 and 2023, Dr. Gino was accused by other professors on a blog site of falsifying data in academic papers. Harvard told Dr. Gino that it had received allegations that she manipulated data in four papers. She has broadly denied the claims.

Many of Dr. Gino's papers were influential in the field. Her résumé lists dozens of articles, books and papers for which she was an author or co-author. But further studies in recent years have cast doubt on some of her findings.

Dr. Gino did not immediately respond to a request for comment. As of Tuesday morning, her Harvard webpage remained up and she was listed as being on administrative leave. She is no longer included in Harvard Business School's faculty directory.

In a 2012 paper, Dr. Gino showed that people who were paid a small amount of money to solve puzzles were more likely to be honest about how many they had solved if a question about the accuracy of their reports was put at the top of the document instead of the bottom. But in a 2023 blog post at a site about statistical methods called Data Colada, three professors showed that some of the data in the study had been changed in a way that made the result more robust.

Dr. Gino was also a co-author of a similar study in which insurance customers who reported the mileage on their cars were more honest if the question was at the top of the form. In a blog post in 2021, the same authors had found that much of the data came from someone connected to the study, not from the customers.

In 2023, Harvard Business School put Dr. Gino on unpaid administrative leave and banned her from campus, she said on her webpage. "I absolutely did not commit academic fraud," she said. Last year, she added, "Once I have the opportunity to prove this in the court of law, with the support of experts I was denied through Harvard's investigation process, you'll see why their case is so weak and that these are bogus allegations. Until then, this is all I can share."

Dr. Gino has filed a lawsuit against Harvard and the bloggers. Last year, defamation claims in that suit were thrown out when a judge ruled that she was a public figure. Other parts of the lawsuit are ongoing.

Dr. Gino previously taught at the University of North Carolina and Carnegie Mellon. She earned her undergraduate degree and Ph.D. in Italy. She joined Harvard in 2010.

Stripping a professor of tenure is rare, and there are no known instances in recent decades at Harvard. The Harvard Crimson reported that no professor had lost tenure since the rules were formalized in the 1940s.

Harvard was shaken in 2023 by accusations in conservative news media outlets of plagiarism by its president, Claudine Gay. She stepped down as president the next year amid those allegations and criticism of her response to antisemitism on campus.
Harvard is also embroiled in a high-stakes dispute with the Trump administration, which is looking to cancel all federal government contracts with the university and block it from enrolling international students.


New York Times
14-05-2025
- New York Times
The Professors Are Using ChatGPT, and Some Students Aren't Happy About It
In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?

Halfway through the document, which her business professor had made for a lesson on models of leadership, was an instruction to ChatGPT to "expand on all areas. Be more detailed and specific." It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.

Ms. Stapleton texted a friend in the class. "Did you see the notes he put on Canvas?" she wrote, referring to the university's software platform for hosting course materials. "He made it with ChatGPT."

"OMG Stop," the classmate responded. "What the hell?"

Ms. Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of A.I.: distorted text, photos of office workers with extraneous body parts and egregious misspellings.

She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade "academically dishonest activities," including the unauthorized use of artificial intelligence or chatbots.

"He's telling us not to use it, and then he's using it himself," she said.

Ms. Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of A.I. as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.

When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed A.I. detection services, despite concerns about their accuracy.

But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like "crucial" and "delve." In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.

For their part, professors said they used A.I. chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.

Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative A.I. tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The A.I. industry wants to help, and to profit: The start-ups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities. (The Times has sued OpenAI for copyright infringement for use of news content without permission.)

Generative A.I. is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Ms. Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.

Making the Grade

Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some "really nice feedback" to give Marie.

"From my perspective, the professor didn't even read anything that I wrote," said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use A.I. Working at the school was a "third job" for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.

Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.

Robert MacAuslan, vice president of A.I. at Southern New Hampshire, said that the school believed "in the power of A.I. to transform education" and that there were guidelines for both faculty and students to "ensure that this technology enhances, rather than replaces, human creativity and oversight." A dos-and-don'ts list for faculty forbids using tools, such as ChatGPT and Grammarly, "in place of authentic, human-centric feedback."

"These tools should never be used to 'do the work' for them," Dr. MacAuslan said. "Rather, they can be looked at as enhancements to their already established processes."

After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.

Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. "Not a big fan of that," Dr. Shovlin said, after being told of Marie's experience. Dr. Shovlin is also an A.I. faculty fellow, whose role includes developing the right ways to incorporate A.I. into teaching and learning. "The value that we add as instructors is the feedback that we're able to give students," he said. "It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them."

Dr. Shovlin is a proponent of incorporating A.I. into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and "develop an ethical compass with A.I.," he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. "If you screw up, you're going to be fired," Dr. Shovlin said.

One example he uses in his own classes: In 2023, officials at Vanderbilt University's education school responded to a mass shooting at another university by sending an email to students calling for community cohesion. The message, which described promoting a "culture of care" by "building strong relationships with one another," included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticized the outsourcing of empathy to a machine, the officials involved temporarily stepped down.

Not all situations are so clear cut. Dr. Shovlin said it was tricky to come up with rules because reasonable A.I. use may vary depending on the subject. His department, the Center for Teaching, Learning and Assessment, instead has "principles" for A.I. integration, one of which eschews a "one-size-fits-all approach."

The Times contacted dozens of professors whose students had mentioned their A.I. use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.

There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative A.I., while others said they didn't disclose its use because of students' skepticism about the technology.

Most, however, felt that Ms. Stapleton's experience at Northeastern — in which her professor appeared to use A.I. to generate class notes and slides — was perfectly fine. That was Dr. Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Dr. Shovlin compared it to a longstanding practice in academia of using content, such as lesson plans and case studies, from third-party publishers. To say a professor is "some kind of monster" for using A.I. to generate slides "is, to me, ridiculous," he said.

The Calculator on Steroids

Shingirai Christopher Kwaramba, a business professor at Virginia Commonwealth University, described ChatGPT as a partner that saved time. Lesson plans that used to take days to develop now take hours, he said. He uses it, for example, to generate data sets for fictional chain stores, which students use in an exercise to understand various statistical concepts.

"I see it as the age of the calculator on steroids," Dr. Kwaramba said.

Dr. Kwaramba said he now had more time for student office hours. Other professors, like David Malan at Harvard, said the use of A.I. meant fewer students were coming to office hours for remedial help. Dr. Malan, a computer science professor, has integrated a custom A.I. chatbot into a popular class he teaches on the fundamentals of computer programming. His hundreds of students can turn to it for help with their coding assignments.

Dr. Malan has had to tinker with the chatbot to hone its pedagogical approach, so that it offers only guidance and not the full answers. The majority of 500 students surveyed in 2023, the first year it was offered, said they found it helpful. Rather than spend time on "more mundane questions about introductory material" during office hours, he and his teaching assistants prioritize interactions with students at weekly lunches and hackathons — "more memorable moments and experiences," Dr. Malan said.

Katy Pearce, a communication professor at the University of Washington, developed a custom A.I. chatbot by training it on versions of old assignments that she had graded. It can now give students feedback on their writing that mimics her own at any time, day or night. It has been beneficial for students who are otherwise hesitant to ask for help, she said.

"Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by A.I.?" she said. "Yeah, absolutely."

What happens then to the pipeline of future professors who would come from the ranks of teaching assistants? "It will absolutely be an issue," Dr. Pearce said.

A Teachable Moment

After filing her complaint at Northeastern, Ms. Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.

Rick Arrowood, her professor, was contrite about the episode. Dr. Arrowood, an adjunct professor who has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the A.I. search engine Perplexity and an A.I. presentation generator called Gamma to "give them a fresh look." At a glance, he said, the notes and presentations they had generated looked great. "In hindsight, I wish I would have looked at it more closely," he said.

He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.

The embarrassing situation made him realize, he said, that professors should approach A.I. with more caution and disclose to students when and how it is used. Northeastern issued a formal A.I. policy only recently; it requires attribution when A.I. systems are used and review of the output for "accuracy and appropriateness." A Northeastern spokeswoman said the school "embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations."

"I'm all about teaching," Dr. Arrowood said. "If my experience can be something people can learn from, then, OK, that's my happy spot."