Truth in the age of AI
The Straits Times, 27-07-2025
AI is causing seismic changes in how we understand what is true and what is not. It can have serious implications for important events such as elections.
In today's world, artificial intelligence (AI) has transformed the way we live, work and play. Algorithms power our social media feeds, and bots can make our work more efficient.
AI is the ability of machines to think and act like humans by learning, solving problems, and making decisions.
With its ability to process and analyse vast amounts of data in seconds, AI has become a powerful tool in sectors like healthcare, finance and banking, manufacturing and supply chains.
But as AI proliferates, it is also silently causing seismic changes in how we understand what is true and what is not.
The digital world is seeing an explosion of synthetic content that muddies the line between truth and fiction, which can have serious implications for important events such as elections.
Deepfakes – hyper-realistic videos created using deep learning – are perhaps the most high-profile example of this.
A 2022 deepfake video of Ukrainian President Volodymyr Zelensky urging his troops to surrender during the Russia-Ukraine war was widely circulated before being debunked. The minute-long video briefly sowed confusion and panic.
In 2024, during India's general election, political parties 'resurrected' deceased leaders and used deepfake avatars to influence voters. For instance, the former Tamil Nadu chief minister M. Karunanidhi, who died in 2018, appeared in AI-generated videos endorsing his son's political run.
In Britain, more than 100 deepfake videos featuring then Prime Minister Rishi Sunak ran as ads on Facebook before the 2024 election. The ads were estimated to have reached 400,000 people in a month, and payments for them originated overseas.
When voters see such manipulated videos making controversial or false statements, it can damage reputations or sway opinions – even after the deepfake is debunked.
The threat is not just about altering individual votes – it is about eroding trust in the electoral process altogether. When voters begin to doubt everything they see or hear, apathy and cynicism can take hold, weakening democratic institutions.
With its ability to blur the distinction between the real and the fake, AI's impact on truth is insidious: it is no longer a simple matter of telling black from white, fact from fiction.
NewsGuard, a media literacy tool that rates the reliability of online sources, found that by May 2025, more than 1,200 AI-generated news and information sites were operating with little to no human oversight, a number that had increased by more than 20 times in two years. Many of these websites even appeared to be credible.
Reliable media organisations have also come under fire for using AI-generated news summaries that are sometimes inaccurate. Apple faced calls earlier in 2025 to remove its AI-generated news alerts on iPhones that were in some instances completely false and 'hallucinated'.
In its Global Risks Report 2024, the World Economic Forum said: 'Emerging as the most severe global risk anticipated over the next two years, foreign and domestic actors alike will leverage misinformation and disinformation to further widen societal and political divides.'
AI will only amplify those divides. Bad actors can use it to spread misinformation that appears credible, boosted by engagement-driven algorithms, deceiving even those adept at navigating news sites.
He heard what sounded like his son crying and fell for the scam
Beyond elections and political influence, AI is also being used by scammers to target individuals.
Voice cloning technology is increasingly being deployed by fraudsters in impersonation scams. With just a short sample of someone's voice – easily sourced from a TikTok video, a podcast clip, or even a voicemail – AI tools can convincingly replicate it.
In India, Mr Himanshu Shekhar Singh fell prey to an elaborate scheme after receiving a phone call from a purported police officer, who claimed that his 18-year-old son had been caught with a gang of rapists, and that 30,000 rupees (S$444) was needed to clear his name.
He heard what sounded like his son crying over the phone, and made an initial payment of 10,000 rupees, only to find out that his son was unharmed, and he had been duped.
In Hong Kong, police said an unnamed multinational company was scammed out of HK$200 million (S$32.6 million) after an employee attended a video conference call with deepfake recreations of the company's Britain-based chief financial officer and other staff. The employee was duped into making the transfers on the scammers' instructions.
Scammers are also using generative AI to produce phishing e-mails and scam messages that are far more convincing than traditional spam, which is more likely to contain incorrect grammar and suspicious-looking links.
Cyber-security firm Barracuda, together with researchers from Columbia University and the University of Chicago, found in a study published on June 18 that 51 per cent of malicious and spam e-mails are now generated using AI tools.
The research team examined a dataset of spam e-mails flagged by Barracuda between February 2022 and April 2025. Using trained detection tools, they assessed whether each malicious or unwanted message had been produced by AI.
Their analysis revealed a consistent increase in the share of AI-generated spam e-mails starting from November 2022 and continuing until early 2024. Notably, November 2022 marked the public release of ChatGPT.
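Barracuda's actual detection tools are not public. As a rough illustration of the general approach, training a text classifier on examples of human-written and AI-generated messages, here is a minimal Python sketch; the tiny inline dataset, its labels and the feature choice are all invented for demonstration only.

```python
# Minimal sketch of an "AI-written text" classifier, assuming a labelled
# corpus of human-written and AI-generated e-mails. The inline dataset and
# feature choice are illustrative; they are not Barracuda's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = AI-generated, 0 = human-written (hypothetical labels).
emails = [
    "Dear valued customer, we are pleased to inform you of an important update to your account.",
    "I hope this message finds you well. Please review the attached invoice at your earliest convenience.",
    "hey ur acount is blockd!! click here fast http://sketchy.example to unlok",
    "URGENT!!! you win prize, send detail now or lose moneys",
]
labels = [1, 1, 0, 0]

# TF-IDF over character n-grams is a common stylometric baseline;
# real detectors use far richer features and much larger corpora.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(emails, labels)

new_email = "Dear customer, kindly verify your account details via the secure link below."
prob_ai = model.predict_proba([new_email])[0][1]
print(f"Estimated probability the e-mail is AI-generated: {prob_ai:.2f}")
```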
Can AI be a force for good?
But just as AI is being used to deceive, it is also being used to defend the truth.
Newsrooms around the world are increasingly turning to AI to enhance their fact-checking capabilities and stay ahead of misinformation.
Reuters, for example, has developed News Tracer, a tool powered by machine learning and natural language processing that monitors social media platforms like X to detect and assess the credibility of breaking news stories in real time. It assigns credibility scores to emerging narratives, helping journalists filter out false leads quickly.
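Reuters has not published how News Tracer's scoring works, but the idea of combining verification signals into a single credibility score can be illustrated with a toy example. Every signal, weight and threshold below is hypothetical.

```python
# Toy credibility scorer in the spirit of tools like News Tracer.
# The signals, weights and threshold are invented for illustration;
# Reuters' actual model is proprietary.
from dataclasses import dataclass

@dataclass
class Narrative:
    corroborating_sources: int   # independent accounts reporting the same event
    verified_share: float        # fraction of posts from verified accounts (0-1)
    avg_account_age_days: float  # average age of the posting accounts
    has_original_media: bool     # eyewitness photos or videos attached

def credibility_score(n: Narrative) -> float:
    """Combine simple signals into a 0-100 score (illustrative weights)."""
    score = 0.0
    score += min(n.corroborating_sources, 10) * 4       # up to 40 points
    score += n.verified_share * 30                      # up to 30 points
    score += min(n.avg_account_age_days / 365, 1) * 20  # up to 20 points
    score += 10 if n.has_original_media else 0          # 10 points
    return score

story = Narrative(corroborating_sources=6, verified_share=0.5,
                  avg_account_age_days=800, has_original_media=True)
print(f"Credibility score: {credibility_score(story):.0f}/100")
# A newsroom might surface only narratives above some threshold, say 60.
```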
Meanwhile, major news organisations like the BBC and The New York Times have collaborated with partners like Microsoft and Media City Bergen under an initiative called Project Origin to use AI to track the provenance of digital content and verify its authenticity.
Tech companies are also contributing to efforts to combat the rise of misinformation.
Google's Jigsaw unit has developed tools such as 'About this image', which helps users trace an image's origin and detect whether it was AI-generated or manipulated.
Microsoft has also contributed to the fight against deception with its Video Authenticator tool, which detects deepfakes by identifying giveaway signs invisible to the human eye that an image has been artificially generated.
In a video where someone's face has been mapped onto another person's body, for example, these signs include subtle fading or greyscale pixels at the boundary where the images have been merged.
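Microsoft has not disclosed Video Authenticator's internals, but the kind of blending artefact described above can be illustrated with a simple statistical check on a synthetic image. The frame, the feathered seam and the gradient heuristic below are all constructed for demonstration; real detectors rely on trained models.

```python
# Synthetic illustration of a blending artefact: a pasted region whose edge
# has been feathered leaves a band of small, consistent brightness steps
# that differ statistically from natural pixel noise.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.uniform(0.4, 0.6, size=(128, 128))  # stand-in greyscale frame

# Paste a brighter 64x64 patch with an 8-pixel feathered (blended) left edge.
patch = rng.uniform(0.7, 0.9, size=(64, 64))
for i in range(8):
    alpha = i / 8  # blend weight rises linearly across the seam
    frame[32:96, 32 + i] = (1 - alpha) * frame[32:96, 32 + i] + alpha * patch[:, i]
frame[32:96, 40:96] = patch[:, 8:]

# Horizontal brightness gradients: the seam shows a consistent positive step,
# while untouched regions average out to roughly zero.
grad = np.diff(frame, axis=1)
seam_band = grad[32:96, 32:40]  # gradient columns spanning the feathered edge
untouched = grad[:32, :]
print(f"mean gradient across seam: {seam_band.mean():+.4f}")
print(f"mean gradient elsewhere:   {untouched.mean():+.4f}")
# A detector could flag regions whose local gradient statistics deviate
# sharply from the frame-wide distribution.
```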
Social media companies are slowly stepping up too.
Meta has introduced labels for AI-generated political ads, and YouTube has rolled out a new tool that requires creators to disclose to viewers when realistic content is made with altered or synthetic media.
The rise of AI has undeniably made it harder to distinguish fact from fiction, but it has also opened new frontiers for safeguarding the truth.
Legislation can establish protective guard rails
Whether AI becomes a conduit for clarity or confusion will also be shaped by the guard rails and regulations that governments and societies put in place.
To that end, the European Union is a front runner in AI regulation. The EU Artificial Intelligence Act was first proposed in 2021 and came into force in August 2024.
The legislation classifies AI by risk and places strict rules on systems that affect public rights and democracy.
For example, AI deemed to pose unacceptable risk, such as social scoring systems and manipulative AI, is prohibited outright. High-risk systems include those that profile individuals to assess, for instance, their work performance or economic situation.
High-risk AI providers must establish a risk management system and practise data governance to ensure that testing data sets are relevant and as free of errors as possible.
This helps to address risks that AI poses to truth, especially around misinformation and algorithmic manipulation.
Countries such as Singapore, Canada, and Britain have also published governance frameworks or set up regulatory sandboxes to guide ethical AI use.
Societies must be equipped to navigate the AI era.
Public education on how deepfakes, bot-generated content and algorithms can skew perception is essential. When citizens understand how AI-generated misinformation works, they are less likely to be misled.
In the EU, media literacy is a core pillar of the Digital Services Act, which requires major online platforms to support educational campaigns that help users recognise disinformation and manipulative content.
Finland has integrated AI literacy into its 2025 school curriculum, from early childhood to vocational training. The aim is to prepare students for a future where AI is increasingly prevalent, build their critical thinking skills, and expose them to ethical considerations around AI.
But mitigating the impact of AI is not just the job of governments and tech companies – individuals can also take steps to protect themselves from deception.
Take care to verify the source of information, especially when it comes through social media. Be wary of sensational photos or videos, and consider the likelihood that the content has been manipulated. When in doubt, consult trusted news sources or channels.
Individuals themselves can also play their part by using AI responsibly – such as avoiding the sharing of unverified content generated by chatbots or image tools.
By staying cautious and curious, people can push back against AI-powered misinformation and create a safer digital space.
How Singapore tackles AI risks
Singapore was among the first countries to introduce a national AI strategy, in 2019, with projects in areas like border clearance operations and chronic disease prediction. But with the rapid development of generative AI and the public roll-out of large language models like ChatGPT, the nation updated its strategy in 2023.
The National AI Strategy 2.0 focuses on nurturing talent, promoting a thriving AI industry and sustaining it with world-leading infrastructure and research that ensures AI serves the public good.
To nurture talent here, Singapore aims to triple its number of AI practitioners to 15,000 by training locals and hiring from overseas.
While the nation is eager to harness the benefits of AI to boost its digital economy, it is also wary of the manipulation, misinformation, and ethical risks involved with the technology.
To mitigate such risks, the country launched the first edition of the Model AI Governance Framework in January 2019. The voluntary framework is a guide for private sector organisations to address key ethical and governance issues when deploying traditional AI.
The framework explains how AI systems work, how to build good data accountability practices, and how to create open and transparent communication.
The framework was updated in 2020 and again in May 2024, when the Model AI Governance Framework for Generative AI was rolled out, building on the earlier editions to account for new risks posed by generative AI. These include hallucinations, where an AI model generates information that is incorrect or not grounded in reality, as well as concerns around copyright infringement.
To combat such challenges, the framework encourages industry players to offer transparency around the safety and hygiene measures taken when developing the AI tool. This can include bias correction techniques, for instance.
The framework also touches on the need for transparency around how AI-generated content is created to enable users to consume content in an informed manner, and how companies and communities should come together on digital literacy initiatives.
In the country's recent general election held in May 2025, a new law banning fake or digitally altered online material that misrepresents candidates during the election period was put in place for the first time.
In passing the Elections (Integrity of Online Advertising) (Amendment) Bill in October 2024, Minister for Digital Development and Information Josephine Teo said that it does not matter whether the content is favourable or unfavourable to any candidate.
The publication of AI-generated misinformation during the election, as well as the boosting, sharing and reposting of such content, was made an offence.
While it was not used during the recent general election, the legal instrument provides a lever to ensure electoral integrity in Singapore.
Overall, Singapore is eager to use AI as a driver of growth. In regulating the technology, it prefers an incremental approach: developing and updating voluntary governance frameworks and drawing up sector-specific guidelines rather than imposing an overarching mandate.
But where there is a risk of AI being used to misinform and manipulate the public, it will not hesitate to legislate, as it did ahead of the 2025 General Election.
Singapore's governance approach combines strong ethical foundations, industry collaboration, and global engagement to ensure AI is used safely and fairly.