Contradictheory: AI and the next generation


The Star · 2 days ago

Here's a conversation I don't think we'd have heard five years ago: 'You know what they do? They send in their part of the work, and it's so obviously ChatGPT. I had to rewrite the whole thing!'
This wasn't a chat I had with the COO of some major company but with a 12-year-old child. She was talking about a piece of group work they had to do for class. And 'this Boy', as she called him (you could hear the capitalised italics in her voice), had waited until the last minute to submit his part.
To be honest, I shouldn't be surprised. These days, lots of people use AI in their work. It's normal. According to the 2024 Work Trend Index released by Microsoft and LinkedIn, 75% of employees surveyed were already using artificial intelligence (AI) to save time and focus on their most important tasks.
But it's not without its problems. An adult using AI to help draft an email is one thing; a student handing in their weekly assignment is another. The adult uses AI to communicate more clearly, whereas the student is taking a shortcut. So, in an effort to deliver better work, the child might actually be learning less.
And it's not going away. A 2024 study by Impact Research for the Walton Family Foundation found that 48% of students use ChatGPT at least weekly, representing a jump of 27 percentage points over 2023. And more students use AI chatbots to write essays and assignments (56%) than to study for tests and quizzes (52%).
So what about the students who don't use AI, like the girl I quoted above? I find they often take a rather antagonistic view. Some kids I talk to (usually the ones already doing well in class) seem to look down on classmates who use AI and, in the process, on the very idea of using AI for their homework as well.
And I think that's wrong.
As soon as I learned about ChatGPT, I felt that the key to using AI tools well was obvious. It lies in the name: tools. Like a ruler for drawing straight lines, or a dictionary for looking up words, AI chatbots are tools, only far more versatile ones.
One of the biggest problems, of course, is that AI chatbots don't always get their facts right (in AI parlance, they 'hallucinate'). So if you ask one for an essay on the 'fastest marine mammal', there's a chance it'll include references to the 'sailfish' and the 'peregrine falcon' – neither of which is a mammal. In one test of AI chatbots, hallucination rates for newer AI systems were as high as 79%.
Even OpenAI, the company behind ChatGPT, isn't immune. Its o3 model hallucinated 33% of the time on the company's PersonQA benchmark, which measures how well a system answers questions about public figures. The newer o4-mini performed even worse, hallucinating 48% of the time.
There are ways to work around this, but I think most people don't know them. For example, many chatbots now have a 'Deep Research' mode that actively searches the internet and presents answers along with sources.
The thing about this is that you, the reasonable, competent, and capable human being, can check the original source to see if it's something you trust. Instead of the machine telling you what it 'knows', it tells you what it found, and it's up to you to verify it.
Another method is to feed the chatbot the materials you want it to use, like a PDF of your textbook or a research paper. Google's NotebookLM is designed for this. It only works with the data you supply, drastically reducing hallucinations. You can then be more sure of the information it produces.
In one stroke, you've turned the chatbot into a hyper-intelligent search engine that not only finds what you're looking for but also understands context, identifies patterns, and helps organise the information.
That's just a small part of what AI can do. But even just helping students find and organise information better is a huge win. And ideally, teachers should lead the charge in classrooms, guiding students on how to work with AI responsibly and effectively.
Instead, many feel compelled to ban it or to try to 'AI-proof' assignments, for example, by demanding handwritten submissions or choosing topics that chatbots are more likely to hallucinate on.
But we can do better. We should allow AI in and teach students how to use it in a way that makes them better. For example, teachers could say that the 'slop' AI generates is the bare minimum. Hand it in as-is, and you'll scrape a C or D. But if you use it to refine your thoughts, to polish your voice, to spark better ideas, then that's where the value lies.
And students can use it to help them revise, by getting it to generate quizzes to test themselves with (they do, of course, have to verify that the answers the AI gives are correct).
Nevertheless, what I've written about so far is using AI as a tool. The future is about using it as a collaborator. Right now, according to the 2025 Microsoft Work Trend Index, 50% of Malaysian workers see AI as a command-based tool, while 48% treat it as a thought partner. The former issue basic instructions; the latter hold conversations, and that is where human-machine collaboration begins.
The report goes on to say explicitly that this kind of partnership is what all employees should strive for when working with AI. That means knowing how to iterate on the output you're given, when to delegate, when to refine the results, and when to push back. In short: the same skills we want kids to learn anyway when working with classmates and teachers.
And the truth is that while I've used AI to find data, summarise reports, and – yes – to proofread this article, I haven't yet actively collaborated with it. However, the future seems to be heading in that direction. Just a few weeks ago, I wrote about mathematician Terence Tao, who predicts that it won't be long before AI-powered computer proof assistants are cited as co-authors on mathematics papers.
Clearly, I still have a lot to learn about using AI day-to-day. And it's hard. It involves trial and error and wasted effort while battling looming deadlines. I may deliver inferior work in the meantime that collaborators have to rewrite. But I remain, as ever, optimistic. Because technology – whether as a tool or a slightly eccentric collaborator – ultimately has the potential to make us and our work better.
Logic is the antithesis of emotion but mathematician-turned-scriptwriter Dzof Azmi's theory is that people need both to make sense of life's vagaries and contradictions. Write to Dzof at lifestyle@thestar.com.my. The views expressed here are entirely the writer's own.


