
Why US colleges are turning to blue book exams to fight ChatGPT cheating
As artificial intelligence tools like ChatGPT reshape the way students study and complete assignments, many US colleges are taking a surprising step backward—toward pen, paper, and the iconic blue exam booklet.
The humble blue book, first introduced in the late 1920s, is making a powerful comeback as educators look for ways to counteract a growing wave of AI-assisted academic dishonesty.
According to The Wall Street Journal, blue book sales have surged in recent years, fueled by growing concern among professors about students using ChatGPT to complete take-home tests, write essays, and finish homework. While many students see AI as a helpful tool, educators are raising red flags about the integrity of academic work in the AI era.
A return to handwritten testing across US campuses
Roaring Spring Paper Products, the family-owned company that manufactures most blue books, reported a significant rise in demand. Over the past two years, sales are up more than 30% at Texas A&M University, nearly 50% at the University of Florida, and 80% at the University of California, Berkeley, according to The Wall Street Journal. At just 23 cents apiece in campus bookstores, the blue book is a simple but effective tool for in-person, supervised exams.
Professors are using blue books to create AI-proof exam environments. As reported by The Wall Street Journal, Yale University lecturer Kevin Elliott adopted the format after discovering that some students had submitted essays with fabricated quotes from famous philosophers, a telltale sign of AI-generated content. Elliott replaced take-home papers with in-class, handwritten blue book exams and told the newspaper the change worked so well that he plans to continue the approach next academic year.
Faculty are worried, and students are using AI widely
A January 2023 survey by Study.com, as cited by The Wall Street Journal, revealed that nearly 90% of college students admitted to using ChatGPT for homework, 53% had used it to write an essay, and 48% had used it during an at-home test or quiz. Another survey, conducted in January by the American Association of Colleges and Universities and Elon University, found that 59% of US college leaders believe cheating has increased since AI tools became widely available.
Over half of those surveyed also said their faculty struggle to tell the difference between AI-written and student-written work.
More than 70% of college professors expressed concern about how ChatGPT is impacting academic integrity, according to The Wall Street Journal. Still, some faculty recognize the complexity of banning a tool that will be widely used in professional settings. Arthur Spirling, a politics professor at Princeton University, told The Wall Street Journal that although he gives proctored blue book exams, he finds it 'strange' to ban a technology students will use in their careers.
'It is strange to say you won't be permitted to do this thing that will be very natural to you for the rest of your career,' he was quoted as saying by The Wall Street Journal.
Balancing tradition with digital-age skills
The shift to blue books is not without controversy. While some educators see it as necessary to preserve academic honesty, others question whether avoiding AI in the classroom prepares students for real-world work environments where AI tools like ChatGPT are likely to be commonplace. As of April, ChatGPT had 500 million global weekly users, up from 400 million in February, according to The Wall Street Journal.
With AI tools becoming more powerful and accessible, the debate over their role in US education continues to intensify. But for now, the blue book stands as a symbol of the analog fight for academic integrity in the digital age.
A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. Australian lawyer apologises in court for AI-generated fake quotes and case citations in a murder trial.(Representational image/ REUTERS) The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took 'full responsibility' for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. 'We are deeply sorry and embarrassed for what occurred,' Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. 'At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,' Elliott told lawyers on Thursday. 'The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,' Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations 'do not exist' and that the submission contained 'fictitious quotes,' court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. 'It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,' Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison.