
Latest news with #RateMyProfessor

Trump Targeted In Texas A&M Political Science Course Material

Yahoo · Politics · 3 days ago

(Texas Scorecard) – A Texas A&M summer class uses a textbook that promotes the establishment narrative that President Donald Trump is a criminal. A source provided Texas Scorecard with information regarding the textbook for a Texas A&M political science course offered this summer semester. The textbook in question is the 11th edition of 'Keeping the Republic: Power and Citizenship in American Politics,' by Christine Barbour and Gerald C. Wright. Wright is an emeritus professor at Indiana University Bloomington. Barbour is his wife and a political science lecturer at the same university.

Barbour and Wright sharply contrasted how they believed the country viewed former President Joe Biden and President Donald Trump when presenting them in the first chapter. 'When President Biden was elected in 2020, we thought we had turned a new page in our political history,' Barbour and Wright wrote in the first chapter. 'But so much of the country's attention remained on Donald Trump, who demanded the limelight during his presidency and refused to relinquish it, as well as political power.'

The first chapter contained more anti-Trump messaging. The authors repeated the establishment media narrative that President Donald Trump is a criminal but didn't mention the politicization and manipulation of the prosecution against him. 'Donald Trump is okay with rules that constrain other people's behavior, but he chafes under rules that apply to him. There is a reason why, when he left office in 2021, he faced a barrage of lawsuits and criminal indictments at the state and federal level, and that reason was not that his political enemies wanted to go after him,' Barbour and Wright wrote. 'It's because he broke or ignored multiple laws he didn't want to follow or that he decided didn't apply to him, and some of the consequences caught up with him.' 'Donald Trump doesn't like to be bound by rules, even the ones written in the Constitution,' the authors continued.
This textbook is required reading in American National Government, a political science course at Texas A&M offered during the summer semester from May 26 to July 4 of this year. The Bush School of Government & Public Service houses Texas A&M College Station's political science department. The school is named after former President George H. W. Bush, and members of the Bush family serve on its advisory board, including former Texas Land Commissioner George P. Bush. Neil Bush, son of George H.W. Bush, is board chair. Neil Bush is also the founder and chairman of the George H.W. Bush Foundation for U.S.-China Relations.

Use of Barbour and Wright's textbook has not been confined to the College Station campus. Dr. Shane Gleason used the ninth edition of the textbook in a Spring 2022 political science class at Texas A&M Corpus Christi. Meanwhile, former Texas A&M Galveston professor John Carhart praised an earlier version of the book. Several of Carhart's student reviews on RateMyProfessor claim he had a very liberal bias in the classroom. Other universities have used earlier versions of this textbook. Previous versions were used at Stephen F. Austin State University in Fall 2014, and at the University of North Texas in Fall 2016 and Spring 2017.

Texas A&M did not respond to a request for comment before publication.

The professors are using ChatGPT, and some students aren't happy about it

Boston Globe · May 14, 2025

'Did you see the notes he put on Canvas?' she wrote, referring to the university's software platform for hosting course materials. 'He made it with ChatGPT.'

'OMG Stop,' the classmate responded. 'What the hell?'

Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts, and egregious misspellings.

Ella Stapleton filed a formal complaint with Northeastern University over a professor's undisclosed use of AI. (Oliver Holms/NYT)

She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade 'academically dishonest activities,' including the unauthorized use of AI or chatbots. 'He's telling us not to use it, and then he's using it himself,' she said.

Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of AI as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.

When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it, while others deployed AI detection services, despite concerns about their accuracy.

How the tables have turned. Now students are complaining on sites such as Rate My Professor about their instructors' overreliance on AI and scrutinizing course materials for words ChatGPT tends to overuse, like 'crucial' and 'delve.'
In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.

For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads, and served as automated teaching assistants.

Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities. (The Times has sued OpenAI for copyright infringement for use of news content without permission.)

Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.

Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some 'really nice feedback' to give Marie.

'From my perspective, the professor didn't even read anything that I wrote,' said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use AI.
Working at the school was a 'third job' for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher. Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.

Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed 'in the power of AI to transform education' and that there were guidelines for both faculty and students to 'ensure that this technology enhances, rather than replaces, human creativity and oversight.' A do's and don'ts list for faculty forbids using tools such as ChatGPT and Grammarly 'in place of authentic, human-centric feedback.' 'These tools should never be used to 'do the work' for them,' MacAuslan said. 'Rather, they can be looked at as enhancements to their already established processes.'

After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.

Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. 'Not a big fan of that,' Shovlin said after being told of Marie's experience. Shovlin is also an AI faculty fellow, whose role includes developing the right ways to incorporate AI into teaching and learning. 'The value that we add as instructors is the feedback that we're able to give students,' he said. 'It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.'

Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and 'develop an ethical compass with AI,' he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences.
'If you screw up, you're going to be fired,' Shovlin said.

The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.

There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative AI, while others said they didn't disclose its use because of students' skepticism about the technology.

Most, however, felt that Stapleton's experience at Northeastern, in which her professor appeared to use AI to generate class notes and slides, was perfectly fine. That was Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers. To say a professor is 'some kind of monster' for using AI to generate slides 'is, to me, ridiculous,' he said.

After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.

Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity, and an AI presentation generator called Gamma to 'give them a fresh look.'
At a glance, he said, the notes and presentations they had generated looked great. 'In hindsight, I wish I would have looked at it more closely,' he said. He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.

The embarrassing situation made him realize, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for 'accuracy and appropriateness.' A Northeastern spokesperson said the school 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations.'

'I'm all about teaching,' Arrowood said. 'If my experience can be something people can learn from, then, OK, that's my happy spot.'

This article originally appeared in The New York Times.
