'Writing is thinking': Do students who use ChatGPT learn less?


Straits Times, 2 days ago
A recent study suggests students who use ChatGPT to write essays engage in less critical thinking.
PARIS - When Professor Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.
'It was very clear that ChatGPT had decided this is a common woman's name,' said Prof Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago.
'They weren't even coming up with their own anecdotal stories about their own lives,' she told AFP.
Prof Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester – including when writing about the ethics of artificial intelligence (AI), which she called both 'ironic' and 'mind-boggling'.
So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking.
The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators.
The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online in June, lead author Nataliya Kosmyna told AFP.
'Soulless' AI essays
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains.
The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.
The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80 per cent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 per cent of the other two groups.
By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.
The teachers said they could easily spot the 'soulless' ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.
However, Dr Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid.
She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity.
Dr Kosmyna emphasised it was too early to draw conclusions from the study's small sample size but called for more research into how AI tools could be used more carefully to help learning.
Dr Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some 'off-base' headlines that wrongly extrapolated from the preprint.
'This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains,' she told AFP.
Thinking outside the bot
Prof Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common.
Sometimes students do not even change the font when they copy and paste from ChatGPT, she said.
But Prof Leitzinger called for empathy for students, saying they can get confused when universities encourage the use of AI in some classes but ban it in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Prof Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.
A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas.
'I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for,' he said.
The problem goes beyond high school and university students.
Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.
'Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?' Prof Leitzinger asked. AFP