Latest news with #DigitalEducationCouncil
Yahoo
10-05-2025
- Yahoo
Teachers Using AI to Grade Their Students' Work Sends a Clear Message: They Don't Matter, and Will Soon Be Obsolete
Talk to a teacher lately, and you'll probably get an earful about AI's effects on student attention spans, reading comprehension, and cheating. As AI becomes ubiquitous in everyday life — thanks to tech companies forcing it down our throats — it's probably no shocker that students are using software like ChatGPT at a massive scale. One study by the Digital Education Council found that 86 percent of university students use some type of AI in their work.

That's causing some fed-up teachers to fight fire with fire, using AI chatbots to score their students' work. As one teacher mused on Reddit: "You are welcome to use AI. Just let me know. If you do, the AI will also grade you. You don't write it, I don't read it."

Others are embracing AI with a smile, using it to "tailor math problems to each student," in one example listed by Vice. Some go so far as to require students to use AI. One professor in Ithaca, NY, shares both ChatGPT's comments on student essays and her own, and asks her students to run their essays through AI themselves.

While AI might save educators some time and precious brainpower — which arguably make up the bulk of the gig — the tech isn't remotely cut out for the job, according to researchers at the University of Georgia. While we should probably all know it's a bad idea to grade papers with AI, a new study from the University of Georgia's School of Computing gathered data on just how bad it is.

The research tasked the large language model (LLM) Mixtral with grading written responses to middle school homework. Rather than feeding the LLM a human-created rubric, as is usually done in such studies, the UGA team tasked Mixtral with creating its own grading system. The results were abysmal: compared to a human grader, the LLM accurately graded student work just 33.5 percent of the time. Even when supplied with a human rubric, the model's accuracy rate was just over 50 percent.

Though the LLM "graded" quickly, its scores were frequently based on flawed logic inherent to LLMs. "While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading," wrote the researchers.

"Students could mention a temperature increase, and the large language model interprets that all students understand the particles are moving faster when temperatures rise," said Xiaoming Zhai, one of the UGA researchers. "But based upon the student writing, as a human, we're not able to infer whether the students know whether the particles will move faster or not."

Though the researchers wrote that "incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate [the] gap and enhance LLMs' scoring accuracy," a boost from 33.5 to 50 percent accuracy is laughable. Remember, this is the technology that's supposed to bring about a "new epoch" — a technology we've poured more seed money into than any in human history. If there were a 50 percent chance your car would fail catastrophically on the highway, none of us would be driving. So why is it okay for teachers to take the same gamble with students?

It's just further confirmation that AI is no substitute for a living, breathing teacher, and that isn't likely to change anytime soon. In fact, there's mounting evidence that AI's comprehension abilities are getting worse as time goes on and original data becomes scarce.
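A note on what those numbers mean: the study's accuracy figures are exact-agreement rates between the model's scores and a human grader's. Here is a minimal sketch of that computation in Python, using made-up scores rather than the study's data:

```python
# Minimal sketch: exact-agreement rate between LLM-assigned and
# human-assigned scores. Hypothetical data, not the study's.

def agreement_rate(llm_scores: list[int], human_scores: list[int]) -> float:
    """Fraction of responses where the LLM score matches the human score."""
    if len(llm_scores) != len(human_scores):
        raise ValueError("score lists must be the same length")
    matches = sum(l == h for l, h in zip(llm_scores, human_scores))
    return matches / len(human_scores)

# Hypothetical rubric scores (0-3) for six student responses.
human = [3, 2, 2, 1, 3, 0]
llm   = [3, 3, 2, 2, 1, 0]  # the model agrees on 3 of 6 responses

print(f"agreement: {agreement_rate(llm, human):.1%}")  # agreement: 50.0%
```

Even "just over 50 percent" agreement means the model and the human disagree on roughly every other response.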
Recent reporting by the New York Times found that the latest generation of AI models hallucinate as much as 79 percent of the time — far more often than earlier models. When teachers choose to embrace AI, this is the technology they're foisting on their kids: notoriously inaccurate, overly eager to please, and prone to spewing outright lies. That's before we even get into the cognitive decline that comes with regular AI use. If this is the answer to the AI cheating crisis, then maybe it'd make more sense to cut out the middleman: close the schools and let the kids go one-on-one with their artificial buddies.
Yahoo
26-01-2025
- Science
- Yahoo
Opinion - To short-circuit the higher education AI apocalypse, we must embrace generative AI
The rapid improvement of generative AI tools has led many of my peers to proclaim that higher education as we know it has come to a crashing and shocking end. I agree. In my large-enrollment general education course at the University of Florida, I can no longer assign an essay asking students to state their views on genetic engineering and assume the responses I receive are written by humans. So the critical question we must ask as academics is, 'What do we do now?'

Rather than try to create assignments that AI cannot tackle, I propose we develop assignments that embrace AI text generation. We don't want to ignore the 54 percent of students who use AI at least weekly in their course assignments, according to the Digital Education Council. We don't want to ban AI. And even when we, as educators, try to trick AI tools, newer versions of ChatGPT come along to thwart that strategy.

With this in mind, I modified the final assignment in my course to require that students submit an entirely AI-generated first draft, which they then modified to reflect their own perspectives. In the first couple of semesters using this strategy, students color-coded the text to mark which parts were human-generated and which were AI-generated, which let them both use AI and reflect on how they would use it in the future. Tracking of text origin was further streamlined by the recently released 'Authorship' tool from Grammarly, which accurately attributes text as 'typed by a human' or 'copied from a source/AI-generated.'

Advancements in technology have upended the careful development of assessments in higher education before and will continue to do so, even if AI appears to be an all-encompassing, do-everything tool. Those of us born in the 1970s remember a time before the ever-present calculator, when math teachers could assign long-division problems without worrying that students who came up with the correct answer did not understand the methods required to generate it. More recently, language translation, a key learning tool in language acquisition, was upended over a few days in 2016 by the release of a new version of Google Translate. The rapid improvement in Google Translate parallels how ChatGPT 3.5 burst into the consciousness of a large portion of the population in November 2022. In both cases, educators eventually embraced these new tools to improve student learning outcomes.

While requiring a generative AI first draft is not a model that will work in every situation, 'showing the work' and student reflection can play key roles in student assessment. My twin high school seniors possess graphing calculators more powerful than the computer on which I wrote my dissertation, so I have observed firsthand how educators modify assessments to adjust for such changes, emphasizing the process needed to answer the assignment more than the final answer. Language teachers, for example, have pivoted to incorporate student reflections on why one word was chosen over another.

In my course, Part B of the final assignment requires students to reflect on how well (or poorly) the initial AI draft reflected their views on the assigned topic. I acknowledge that students with access to AI during the reflection portion of assignments could use the tool to produce that 'work' as well.
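Where a dedicated attribution tool isn't available, a crude proxy for revision is to diff the AI first draft against the final submission; the less the two overlap, the more the student rewrote. Here is a minimal sketch in Python using the standard-library difflib, with made-up text (this is an illustration, not how Grammarly's Authorship works):

```python
# Crude revision proxy: similarity between the AI first draft and the
# student's final submission. Made-up text; not the Authorship tool.
from difflib import SequenceMatcher

ai_draft = ("Genetic engineering offers clear benefits for crop yields "
            "and disease resistance.")
final_draft = ("In my view, genetic engineering offers real benefits for "
               "crop yields, but the risks to biodiversity deserve far "
               "more weight than my AI draft gave them.")

# ratio() returns 0.0-1.0; lower similarity suggests heavier revision.
similarity = SequenceMatcher(None, ai_draft, final_draft).ratio()
print(f"overlap with AI draft: {similarity:.0%}")
```

Such a score is only a rough signal; it cannot distinguish thoughtful editing from mechanical paraphrase, but it gives an instructor a quick sense of how much of the AI scaffold remains.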
Tools that track AI usage, like the 'Authorship' tool, hold promise for giving both instructors and students information on where and how much AI text was used in an assignment. The capability of AI to generate text (and images) will keep advancing, and it will become increasingly integrated into our daily lives and our students'. Within our professional lifetimes, it will be capable of responding to the most imaginative essay prompts educators can design.

By shifting the focus of assignments from pure content creation to critical engagement, analysis and editing, we will teach our students how to think creatively, collaborate and communicate their ideas effectively and responsibly. These are the same skills they must master to work successfully in teams and communicate efficiently in their future careers.

Brian Harfe, Ph.D., is a professor in the College of Medicine and associate provost at the University of Florida. He runs 14 international exchange programs and a study abroad program while teaching about 450 students each semester.

