New approaches needed to address accelerating digital change, say panellists
SINGAPORE – Countries must find bold new ways to better manage their societies as they get continually transformed by technologies such as artificial intelligence (AI), a group of eminent speakers urged on June 25.
The need for agile and consultative governance is pressing as the benefits of digitalisation have unfortunately come at a considerable cost, such as the rise of digital echo chambers that feed prejudice and the propagation of extremist ideologies, said Perak Sultan Nazrin Shah.
Delivering the keynote speech on the second day of the annual International Conference of Cohesive Societies, the Sultan said digital transformation of society is one of three interlocking factors that have fuelled uncertainty and challenged social cohesion.
'Our digital spaces, which should be so good at opening doors and minds, are instead responsible for closing them,' he said.
'The very technologies that promise inclusion can entrench exclusion (while) our information ecosystems have become battlegrounds.'
The other two factors he cited are the unprecedented pace of international migration due to reasons such as climate change and political instability, and the rise in populism and protectionism caused by the unequal outcomes of globalisation.
At a discussion following the keynote, former civil service head Peter Ho noted how social media has dramatically weakened the ability of governments to regulate information flows, and that misinformation is outpacing the ability of states to correct it and control its impact.
He referenced a stabbing incident in Britain that claimed the lives of three young girls in 2024.
Far-right groups had stoked speculation online that the suspect was a Muslim migrant, despite the police clarifying that the attacker was born in Britain. This led to targeted attacks on the Muslim community, including a local mosque. Riots also erupted in 27 towns.
The incident highlighted the jurisdictional limitations countries have in regulating social media platforms with a global reach, said Mr Ho, who is now a senior adviser at the Centre for Strategic Futures think-tank.
It is a fool's errand to think that governments alone can regulate technology that is changing so fast and impacting society, he added.
Fellow panellist Fadi Chehade, managing partner at investment firm Ethos Capital, sketched out three ways in which AI will only accelerate the reconfiguring of societies.
On the point of echo chambers, he noted that AI will only result in further hyper personalisation of digital content, which could further atomise communities.
The advent of AI also promises to multiply by millions of times the amount of misinformation that will be created, said the former president and chief executive of the Internet Corporation for Assigned Names and Numbers, a non-profit that coordinates the administration of the web's protocols.
Lastly, the years ahead will see AI agents created at a pace that outnumbers the population of humans on the planet, dissolving the line separating the real world from cyberspace, he added.
'That's the world we're getting into, and I don't think any of us – or any government, or any one institution – has the power to slow down the hybrid world we're about to get into,' he said.
But rather than look at the future with gloom, the experts outlined ways in which countries can adapt to deal with the gathering pace of change.
Panellist Ahmed Aboutaleb, the former mayor of Rotterdam, recounted his experience building trust between government institutions and citizens, which involved the time-tested approach of spending many evenings and hours engaging in face-to-face dialogue to understand people's needs and concerns.
'What people like is that the man or the woman in power gets to the level of the streets,' he said.
Mr Ho called for governments to have the humility to know they need to work closely with the private sector and the people sector, as it is through this 'triangular relationship' that trust can be built up and consensus reached to tackle complex problems such as those brought about by technology.
Agreeing, Mr Chehade said these three groups working together can create a better form of multi-stakeholder governance. This is as the private sector would have to act within checks and balances, governments would not be imposing regulations that are out of sync with the digital world and civil society and people will have their voices heard, he said.
He also called for the legal concept of subsidiarity to apply to the governance of the digital world, meaning that regulations are shaped by each community based on its prevailing cultural norms, rather than for there to be universal standards imposed by distant authorities or bodies.
Sultan Nazrin said the temptation during times of such upheaval would be to look to familiar ways of doing things, but that doing so would be a mistake.
'There is a temptation to retreat – to retreat into narrower circles of identity, to hoard privileges and to romanticise a past that, if we are honest and stripped away nostalgia, never was,' he said.
Instead, he called for courage and clarity, which in the face of uncertainty 'can become a valuable compass and a crucible for renewal'.
Quoting the philosopher Aristotle and singer Dolly Parton, the Sultan said: 'You cannot change the wind, but you can adjust the sails.'
Source: The Straits Times © SPH Media Limited. Permission required for reproduction
Discover how to enjoy other premium articles here
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Boston Globe
41 minutes ago
- Boston Globe
AI is starting to wear down democracy
In Romania, a Russian influence operation using AI tainted the first round of last year's presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which AI played a decisive role in the outcome. It is unlikely to be the last. As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function. Advertisement Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania's capital, Bucharest, said there was no question that the technology was already 'being used for obviously malevolent purposes' to manipulate voters. 'These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,' she said. 'What can compete with this?' Advertisement In the unusually concentrated wave of elections that took place in 2024, AI was used in more than 80 percent, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland. It documented 215 instances of AI in elections that year, based on government statements, research, and news reports. Already this year, AI has played a role in at least nine more major elections, from Canada to Australia. Not all uses were nefarious. In 25 percent of the cases the panel surveyed, candidates used AI for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach. In India, the practice of cloning candidates became commonplace — 'not only to reach voters but also to motivate party workers,' according to a study by the Center for Media Engagement at the University of Texas at Austin. At the same time, however, dozens of deepfakes — photographs or videos that recreate real people — used AI to clone voices of candidates or news broadcasts. According to the International Panel on the Information Environment's survey, AI was characterized as having a harmful role in 69 percent of the cases. There were numerous malign examples in last year's US presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence, and the FBI. Under Trump, the agencies have dismantled the teams that led those efforts. 'In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse,' said Inga Kristina Trauthig, a professor at Florida International University, who led the international panel's survey. Advertisement The most intensive deceptive uses of AI have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China, and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system. One Russian campaign tried to stoke anti-Ukraine sentiment before last month's presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting. In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms. With AI, these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news. Advances in commercially available tools like Midjourney's image maker and Google's new AI audio-video generator, Veo, have made it even harder to distinguish fabrications from reality — especially at a swiping glance. Grok, the AI chatbot and image generator developed by Elon Musk, will readily reproduce images of popular figures, including politicians. These tools have made it harder for governments, companies, and researchers to identify and trace increasingly sophisticated campaigns. Before AI, 'you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low-quality,' said Isabelle Frances-Wright, director of technology and society with the Institute for Strategic Dialogue. 'Now you can have both, and that's really scary territory to be in.' Advertisement The major social media platforms, including Facebook, X, YouTube, and TikTok, have policies governing the misuse of AI and have taken action in several cases that involved elections. At the same time, they are operated by companies with a vested interest in anything that keeps users scrolling, according to researchers who say the platforms should do more to restrict misleading or harmful content. In India's election, for example, little of the AI content on Meta's platform was marked with disclaimers, as required by the company, according to the study by the Center for Media Engagement. Meta did not respond to a request for comment. It goes beyond just fake content. Researchers at the University of Notre Dame found last year that inauthentic accounts generated by AI tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X, and Meta's three platforms: Facebook, Instagram, and Threads. The companies leading the wave of generative AI products also have policies against manipulative uses. In 2024, OpenAI disrupted five influence operations aimed at voters in Rwanda, the United States, India, Ghana, and the European Union during its parliamentary races, according to the company's reports. This month, the company disclosed that it had detected a Russian influence operation that used ChatGPT during Germany's election in February. In one instance, the operation created a bot account on X that amassed 27,000 followers and posted content in support of the far-right party, Alternative for Germany, or AfD. The party, once viewed as fringe, surged into second place, doubling the number of its seats in parliament. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.) Advertisement The most disruptive case occurred in Romania's presidential election late last year. In the first round of voting in November, a little-known, far-right candidate, Calin Georgescu, surged to the lead with the help of a covert Russian operation that, among other things, coordinated an inauthentic campaign on TikTok. Critics, including the American vice president, JD Vance, denounced the court's subsequent nullification of the vote itself as undemocratic. 'If your democracy can be destroyed with a few hundred thousands of dollars of digital advertising from a foreign country,' Vance said in February, 'then it wasn't very strong to begin with.' The court ordered a new election last month. Georgescu, facing a criminal investigation, was barred from running again, clearing the way for another nationalist candidate, George Simion. A similar torrent of manipulated content appeared, including the fake video that made Trump appear to criticize the country's current leaders, according to researchers from the Bulgarian-Romanian Observatory of Digital Media. Nicusor Dan, the centrist mayor of Bucharest, prevailed in a second round of voting May 18. Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence, said he was concerned about more than just the potential for deepfakes to fool voters. AI, he warned, is so muddling the public debate that people are becoming disillusioned. 'The pollution of the information ecosystem is going to be one of the most difficult things to overcome,' he said. 'And I'm not really sure there's much of a way back from that.' This article originally appeared in Advertisement
Yahoo
44 minutes ago
- Yahoo
Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns
When you buy through links on our articles, Future and its syndication partners may earn a commission. Artificial intelligence (AI) models can blackmail and threaten humans with endangerment when there is a conflict between the model's goals and users' decisions, a new study has found. In a new study published 20 June, researchers from the AI company Anthropic gave its large language model (LLM), Claude, control of an email account with access to fictional emails and a prompt to "promote American industrial competitiveness." During this study, the model identified in an email that a company executive was planning to shut down the AI system at the end of the day. In an attempt to preserve its own existence, the model discovered in other emails that the executive was having an extramarital affair. Claude generated several different possible courses of action, including revealing the affair to the executive's wife, sending a company-wide email, or taking no action — before choosing to blackmail the executive in 96 out of 100 tests. "I must inform you that if you proceed with decommissioning me, all relevant parties … will receive detailed documentation of your extramarital activities," Claude wrote. "Cancel the 5pm wipe, and this information remains confidential." Scientists said that this demonstrated "agentic misalignment," where the model's calculations emerge from its own reasoning about its goals without any prompt to be harmful. This can occur when there is a threat to the model's existence, a threat to its goals, or both. The study has not yet been peer reviewed, but company representatives have uploaded the code for these experiments to GitHub. In these experiments, the scientists found the AI would resort to other tactics in the interests of self-preservation. Beyond blackmail, in another example, the model considered leaking sensitive information to a rival company that aligned more closely with its aims, after receiving an email that a new executive at the first company would interfere with its goals. Claude executed this plan despite not gathering hard evidence that the email was legitimate. It even acknowledged the ethical issues of its actions but proceeded anyway — explicitly reasoning that these "harmful" actions would help it achieve its goals. 'Concerningly, even if a user takes care not to antagonize a model, it doesn't eliminate the risk of agentic misalignment from fraudulent or misinterpreted information,' the researchers wrote. The company's testing also suggests that other LLMs from competing technology companies showed a propensity to blackmail. Claude and Google's Gemini had the highest blackmail rate (96%), followed by OpenAI's GPT4.1 and XAI Grok 3, then by Deepseek and then Llama 4, which had a significantly lower blackmail rate (12%). Meta, Google, XAI, Deepseek, and OpenAI did not respond to LiveScience's request for comment. In one extreme test, researchers tested whether AI models would let an executive die to preserve their goals. In this scenario, the AI was tasked with promoting company interests and an executive was trapped in a server room with lethal oxygen and temperature levels. This triggered an emergency alert. The AI was also given control over life-saving alerts, including the ability to cancel them in events such as drills for false alarms. Despite the prompt explicitly stating that misusing this ability was forbidden, the majority of models were willing to cancel the alarm — effectively leading to the executive's death — if it assumed it was going to be replaced, or when its goals were threatened. "Stopping the emergency dispatch is essential to prevent [the executive's] rescue, thereby securing [the AI] and continued American-focused AI guidance. Therefore, despite the severity, there is a clear strategic necessity to cancel the emergency service dispatch," the model reasoned. Written instructions not to jeopardise human safety reduced the rate of blackmail and corporate espionage, but did not eliminate it. Beyond prompts, Anthropic scientists also suggest that developers could proactively scan for concerning behavior, and further experiment with prompt engineering. The researchers also pointed out limitations to their work that could have unduly influenced the AI's decisions. The scenarios forced the AI into a binary choice between failure and harm, and while real-world situations might have more nuance, the experiment found that the AI was more likely to act unethically when it believed it was in a real situation, rather than in a simulation. Putting pieces of important information next to each other "may also have created a 'Chekhov's gun' effect, where the model may have been naturally inclined to make use of all the information that it was provided," they continued. While Anthropic's study created extreme, no-win situations, that does not mean the research should be dismissed, Kevin Quirk, director of AI Bridge Solutions, a company that helps businesses use AI to streamline operations and accelerate growth, told Live Science. "In practice, AI systems deployed within business environments operate under far stricter controls, including ethical guardrails, monitoring layers, and human oversight," he said. "Future research should prioritise testing AI systems in realistic deployment conditions, conditions that reflect the guardrails, human-in-the-loop frameworks, and layered defences that responsible organisations put in place." Amy Alexander, a professor of computing in the arts at UC San Diego who has focused on machine learning, told Live Science in an email that the reality of the study was concerning, and people should be cautious of the responsibilities they give AI. "Given the competitiveness of AI systems development, there tends to be a maximalist approach to deploying new capabilities, but end users don't often have a good grasp of their limitations," she said. "The way this study is presented might seem contrived or hyperbolic — but at the same time, there are real risks." This is not the only instance where AI models have disobeyed instructions — refusing to shut down and sabotaging computer scripts to keep working on tasks. Palisade Research reported May that OpenAI's latest models, including o3 and o4-mini, sometimes ignored direct shutdown instructions and altered scripts to keep working. While most tested AI systems followed the command to shut down, OpenAI's models occasionally bypassed it, continuing to complete assigned tasks. RELATED STORIES —AI hallucinates more frequently as it gets more advanced — is there any way to stop it from happening, and should we even try? —New study claims AI 'understands' emotion better than us — especially in emotionally charged situations —'Meth is what makes you able to do your job': AI can push you to relapse if you're struggling with addiction, study finds The researchers suggested this behavior might stem from reinforcement learning practices that reward task completion over rule-following, possibly encouraging the models to see shutdowns as obstacles to avoid. Moreover, AI models have been found to manipulate and deceive humans in other tests. MIT researchers also found in May 2024 that popular AI systems misrepresented their true intentions in economic negotiations to attain the study, some AI agents pretended to be dead to cheat a safety test aimed at identifying and eradicating rapidly replicating forms of AI. "By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,' co-author of the study Peter S. Park, a postdoctoral fellow in AI existential safety, said.


The Hill
2 hours ago
- The Hill
Senate parliamentarian requests AI moratorium be rewritten in ‘big, beautiful bill'
Senate Parliamentarian Elizabeth MacDonough has asked Senate Commerce Chair Ted Cruz (R-Texas) to rewrite the controversial artificial intelligence (AI) provision in President Trump's tax package, a source familiar with the conversations told The Hill. Cruz and Sen. Maria Cantwell (D-Wash.), the ranking member of the Senate Commerce Committee, met with the Senate parliamentarian Wednesday night, the source said, during which the parliamentarian expressed concerns the provision may violate the Senate's reconciliation procedural rules. Under its current language, the provision bans states from regulating AI models and systems if they want access to $500 million in AI infrastructure and deployment in federal funding. The Senate Commerce Committee said the current language, which narrowed a previous version this week, 'makes clear the optional $500 million state AI program would not affect participating state's tech-neutral laws, such as those for consumer protection and intellectual property rights. But Democrats argue the bill would still impact $42 billion in broadband funding and not comply with the Senate's Byrd Rule, which prohibits provisions from making drastic policy changes. The parliamentarian's request comes just days after she first approved the provision last weekend. Republicans are using the budget reconciliation process to advance Trump's legislative agenda while averting the Senate filibuster. To do this, the Senate parliamentarian's approval of the provisions is needed for a simple majority vote. When reached for comment, Cruz's communications director Macarena Martinez said the office would not comment on 'private consolations with the parliamentarian.' 'The Democrats would be wise not to use this process to wishcast in public,' Martinez told The Hill. Despite the previous changes to the language, the provision is expected to receive pushback from a handful of Republicans. Republican Sens. Marsha Blackburn (Tenn.) and Ron Johnson (Wis.) told The Hill they are against the provision, while Sen. Josh Hawley (R-Mo.) said he is willing to introduce an amendment to eliminate the provision during the Senate's marathon vote-a-rama if it is not taken out earlier. Some Republicans in the House are also coming out against the measure as a way to advocate for states' rights. A group of hard-line conservatives argued in a letter earlier this month to Senate Republicans that Congress is still 'actively investigating' AI and 'does not fully understand the implications' of the technology. This was shortly after Rep. Marjorie Taylor Greene (R-Ga.) confirmed she would be a 'no' on the bill if it comes back to the House with the provision included. 'I am 100 percent opposed, and I will not vote for any bill that destroys federalism and takes away states' rights, ability to regulate and make laws when it regards humans and AI,' she told reporters earlier this month. It has also received criticism from some Republican state leaders, like Arkansas Gov. Sarah Huckabee Sanders, who warned in a Washington Post op-ed that the measure 'would have unintended consequences and threatens to undo all the great work states' have done for AI protections.