Contradictheory: AI and the next generation


The Star | 03-06-2025
Here's a conversation I don't think we'd have heard five years ago: 'You know what they do? They send in their part of the work, and it's so obviously ChatGPT. I had to rewrite the whole thing!'
This wasn't a chat I had with the COO of some major company but with a 12-year-old child. She was talking about a piece of group work they had to do for class. And this Boy, as she called him (you could hear the capitalised italics in her voice), had waited until the last minute to submit his part.
To be honest, I shouldn't be surprised. These days, lots of people use AI in their work. It's normal. According to the 2024 Work Trend Index released by Microsoft and LinkedIn, 75% of employees were already using artificial intelligence (AI) to save time and focus on their most important tasks.
But it's not without its problems. An adult using AI to help draft an email is one thing. A student handing in their weekly assignment is another. The adult uses AI to communicate more clearly; the student is taking a shortcut. So, in an effort to deliver better work, the child might actually be learning less.
And it's not going away. A 2024 study by Impact Research for the Walton Family Foundation found that 48% of students use ChatGPT at least weekly, representing a jump of 27 percentage points over 2023. And more students use AI chatbots to write essays and assignments (56%) than to study for tests and quizzes (52%).
So what about the other students, the ones who don't use AI, like the girl I quoted above? I find they often take a rather antagonistic view. Some kids I talk to (usually the ones already doing well in class) seem to look down on classmates who use AI and, in the process, they look down on using AI for schoolwork at all.
And I think that's wrong.
As soon as I learned about ChatGPT, I felt that the key to using AI tools well was obvious. It lies in the name: tools. Like a ruler for drawing straight lines, or a dictionary for looking up words, AI chatbots are tools, only incredibly versatile ones.
One of the biggest problems, of course, is that AI chatbots don't always get their facts right (in AI parlance, they 'hallucinate'). So if you ask one for an essay on the 'fastest marine mammal', there's a chance it'll include references to the 'sailfish' and the 'peregrine falcon', neither of which is a mammal. In one test of AI chatbots, hallucination rates for newer AI systems were as high as 79%.
Even OpenAI, the company behind ChatGPT, isn't immune. Its o3 model hallucinated 33% of the time on the company's PersonQA benchmark, which measures how well a model answers questions about public figures. The newer o4-mini performed even worse, hallucinating 48% of the time.
There are ways to work around this, but I think most people don't know them. For example, many chatbots now have a 'Deep Research' mode that actively searches the internet and presents answers along with sources.
The beauty of this is that you, the reasonable, competent, and capable human being, can check the original sources to see if they're something you trust. Instead of the machine telling you what it 'knows', it tells you what it found, and it's up to you to verify it.
Another method is to feed the chatbot the materials you want it to use, like a PDF of your textbook or a research paper. Google's NotebookLM is designed for this. It only works with the data you supply, drastically reducing hallucinations. You can then be more sure of the information it produces.
In one stroke, you've turned the chatbot into a hyper-intelligent search engine that not only finds what you're looking for but also understands context, identifies patterns, and helps organise the information.
That's just a small part of what AI can do. But even just helping students find and organise information better is a huge win. And ideally, teachers should lead the charge in classrooms, guiding students on how to work with AI responsibly and effectively.
Instead, many feel compelled to ban it or to try to 'AI-proof' assignments, for example, by demanding handwritten submissions or choosing topics that chatbots are more likely to hallucinate on.
But we can do better. We should allow AI in and teach students how to use it in a way that makes them better. For example, teachers could say that the 'slop' AI generates is the bare minimum. Hand it in as-is, and you'll scrape a C or D. But if you use it to refine your thoughts, to polish your voice, to spark better ideas, then that's where the value lies.
And students can use it to help them revise by getting it to generate quizzes to test themselves with (they, of course, have to verify that the answers the AI gives are correct).
Nevertheless, what I've written about so far is about using AI as a tool. The future is about using it as a collaborator. Right now, according to the 2025 Microsoft Work Trend Index, 50% of Malaysian workers see AI as a command-based tool, while 48% treat it as a thought partner. The former issue basic instructions; the latter hold conversations with it, and that's where human-machine collaboration begins.
The report goes on to say explicitly that this kind of partnership is what all employees should strive for when working with AI. That means knowing how to iterate on the output it gives, when to delegate, when to refine the results, and when to push back. In short: the same skills we want kids to learn anyway when working with classmates and teachers.
And the truth is that while I've used AI to find data, summarise reports, and – yes – to proofread this article, I haven't yet actively collaborated with it. However, the future seems to be heading in that direction. Just a few weeks ago, I wrote about mathematician Terence Tao, who predicts that it won't be long before AI-powered computer proof assistants are cited as co-authors on mathematics papers.
Clearly, I still have a lot to learn about using AI day-to-day. And it's hard. It involves trial and error and wasted effort while battling with looming deadlines. I may deliver inferior work in the meantime that collaborators have to rewrite. But I remain, as ever, optimistic. Because technology – whether as a tool or a slightly eccentric collaborator – ultimately has the potential to make us and our work better.
Logic is the antithesis of emotion but mathematician-turned-scriptwriter Dzof Azmi's theory is that people need both to make sense of life's vagaries and contradictions. Write to Dzof at lifestyle@thestar.com.my. The views expressed here are entirely the writer's own.

