
Copy, paste, forget
"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago.
"They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.
Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester — including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling".
So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking.
The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators.
The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.
'Soulless' AI essays
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains.
The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.
The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often.
And more than 80 per cent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 per cent of the other two groups.
By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.
The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.
However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or less intelligent.
She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity.
Kosmyna emphasised that the study's small sample size meant it was too early to draw firm conclusions, and called for more research into how AI tools could be used more carefully to support learning.
Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint.
"This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.
Thinking outside the bot
Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common.
Sometimes students do not even change the font when they copy and paste from ChatGPT, she said.
But Leitzinger called for empathy for students, saying they can get confused when universities encourage AI use in some classes but ban it in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways.
But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.
A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas.
"I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.
The problem goes beyond high school and university students.
Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.
"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.