
Latest news with #digitalmanipulation

China Has a Potent New Influence Tool: A.I.-Driven Propaganda

New York Times · 05-08-2025 · Business

Russia's efforts to interfere in the 2016 and 2020 U.S. presidential elections were pretty low-tech. Relying on generic bot messaging, low-quality content and mass targeting, their operations probably had limited impact. Those days are over. With the exponential rise of generative A.I. systems, the greatest danger is no longer a flood of invective and falsehoods on social media. Rather, it is the slow, subtle and corrosive manipulation of online communication — propaganda designed not to shock, but to slip silently into our everyday digital discussions. We have entered a new era in international influence operations, where A.I.-generated narratives shift the political landscape without drawing attention.

A Chinese company called GoLaxy is already undertaking such operations, according to a large cache of documents recently uncovered by the Vanderbilt University Institute of National Security, where we work. The materials show GoLaxy emerging as a leader in technologically advanced, state-aligned influence campaigns, which deploy humanlike bot networks and psychological profiling to target individuals. Its activities and claims suggest it has connections to the Chinese government. GoLaxy has already deployed its technology in Hong Kong and Taiwan, and the documents suggest it may be preparing to expand into the United States. A.I.-driven propaganda is no longer a hypothetical future threat. It is operational, sophisticated and already reshaping how public opinion can be manipulated on a large scale.

A representative of GoLaxy said the company focused on services for business intelligence and denied that it had developed a bot network or psychological profiling tools targeting individuals. The company also denied being under the authority of any government agency or organization.

What sets GoLaxy apart is its integration of generative A.I. with enormous troves of personal data. Its systems continually mine social media platforms to build dynamic psychological profiles. Its content is customized to a person's values, beliefs, emotional tendencies and vulnerabilities. According to the documents, A.I. personas can then engage users in what appears to be a conversation — content that feels authentic, adapts in real time and avoids detection. The result is a highly efficient propaganda engine designed to be nearly indistinguishable from legitimate online interaction, delivered instantaneously at a scale never before achieved.

While the documents offered no specific examples of these conversations, they describe how the technology develops personalized content. By extracting user data and studying broader patterns, A.I. can build synthetic messaging designed to appeal to a wide spectrum of the public. It can adapt to a user's tone, values, habits and interests, according to the documents. Then it can mimic real users by liking posts, leaving comments and pushing targeted content.

According to the documents we uncovered, GoLaxy used its technology to minimize opposition to a 2020 national security law that cracked down on political dissent, identifying thousands of participants and thought leaders from 180,000 Hong Kong Twitter accounts. Then GoLaxy went after what it perceived as lies and misconceptions, ‘correcting' the sources via its army of fake profiles. The company struck again in the lead-up to the 2024 Taiwanese election, when China-aligned groups peddled false claims of corruption and posted deepfakes on social media.

During the campaign, GoLaxy suggested ways to undermine Taiwan's Democratic Progressive Party, which opposes China's claims over the island. The company gathered and most likely supplied information on trends in Taiwanese political debate and recommended the deployment of bot networks to exploit political divisions between the island's parties. GoLaxy had already amassed an abundance of data on Taiwan to support such intrusions, according to the documents, including organizational maps of government institutions — down to their political tendencies, attitude toward China and GPS coordinates — and profiles of over 5,000 accounts belonging to Taiwanese people. In a written statement, GoLaxy denied providing technical support for activities in Hong Kong and Taiwan.

So far, GoLaxy's active deployments appear to have been confined to the Indo-Pacific. Evidence in the documents suggests that the company is positioning itself for expanded operations, including in the United States. GoLaxy has assembled data profiles of at least 117 members of the U.S. Congress and over 2,000 American political figures and thought leaders. Assuming GoLaxy continues to build American dossiers, it is possible the company will bring its operations across the Pacific. The company said it has not collected data targeting U.S. officials.

GoLaxy operates in close alignment with China's national security priorities, although no formal government control has been publicly confirmed. The company was founded in 2010 by a research institute at the state-controlled Chinese Academy of Sciences and has been chaired by a deputy director from the same institute. Since then, GoLaxy has, according to the documents, worked with top-level intelligence, party and military bodies, suggesting integration with China's political system. GoLaxy's strategic alignment became clearer in 2021, when it received funding from Sugon, a Beijing-based supercomputing company flagged by the Pentagon as a Chinese military affiliate. GoLaxy's public-facing A.I. platform coordinates with Sugon's supercomputers and DeepSeek-R1, one of China's leading A.I. models.

These connections are a reminder that influence operations are no longer a sideshow — they are becoming core instruments of statecraft. Battlefields include not only geographic territory with troops and ships but also the online platforms we use every day. The strategy deployed by GoLaxy and others weaponizes the openness that underpins democratic societies. Debate, transparency and pluralism — hallmarks of democratic strength — are also points of vulnerability. Technological tools like GoLaxy's exploit these qualities. The line between surveillance and persuasion is disappearing, fast.

The danger lies in the stealth and scale of these methods, and the speed with which they are improving. A.I.-generated content can be deployed quietly across entire populations with minimal resistance. It operates continually, shaping opinion and corroding democratic institutions beneath the surface. Imagine today's most effective social media platforms, but on a far greater scale, using a far more comprehensive model of their targets and synthetic propaganda that is even more compelling and difficult to resist.

To counter the growing threat of A.I.-driven foreign influence operations, a coordinated response is essential. Academic researchers must work urgently to map how artificial intelligence, open-source intelligence and online influence campaigns converge to serve hostile state objectives. The U.S. government must take the lead in disrupting the infrastructure behind these operations, with the Defense Department targeting foreign influence networks and the Federal Bureau of Investigation working closely with digital platforms to identify and counter false personas. The private sector needs to accelerate A.I. detection capabilities to bolster our ability to identify synthetic content. If we can't identify it, we can't stop it.

We are entering a new era of gray-zone conflict — one marked by information warfare executed at a scale, speed and degree of sophistication never seen before. If we don't quickly figure out how to defend against this kind of A.I.-driven influence, we will be completely exposed.

Brett J. Goldstein leads the Wicked Problems Lab at the Vanderbilt University Institute of National Security and is a former Pentagon official. Brett V. Benson is an associate professor of political science at Vanderbilt and a faculty affiliate at its Institute of National Security.

Danish citizens to ‘own their own faces' to prevent deepfakes

Times · 28-06-2025 · Politics

Denmark plans to become the first country in the world to give its citizens copyright over their faces and voices in an effort to clamp down on ‘deepfakes' — videos, audio clips and images that are digitally doctored to spread false information.

In recent years the tools for making deepfakes, including artificial intelligence-assisted editing software, have become so sophisticated and ubiquitous that it takes little more than a few clicks of a mouse to create them. They are already endemic in the political sphere and were deployed during recent election campaigns in Slovakia, Turkey, Bangladesh, Pakistan and Argentina. The former US president Joe Biden was subjected to an audio deepfake during the Democratic presidential primary in New Hampshire last year. In November an MP from the German Social Democratic Party was reprimanded for posting a deepfake video of Friedrich Merz, the conservative leader and future chancellor, saying that his party ‘despised' the electorate.

The Danish culture ministry said it would soon no longer be possible to distinguish between real and deepfake material. That in turn would undermine trust in authentic pictures and videos, it warned. ‘Since images and videos swiftly become embedded in people's subconscious, digitally manipulated versions of an image or video can establish fundamental doubts and perhaps even a completely wrong perception of genuine depictions of reality.'

There is now broad cross-party support in Denmark's parliament for a reform to the copyright law that would make it illegal to share deepfakes. The bill includes special protection for musicians and performing artists against digital imitations. ‘We are now sending an unequivocal signal to all citizens that you have the right to your own body, your own voice and your own facial features,' said Jakob Engel-Schmidt, the culture minister.

Lars Christian Lilleholt, the parliamentary leader of the Danish Liberal Party, which is part of the ruling coalition, said AI tools had made it alarmingly easy to impersonate politicians and celebrities and to exploit their aura of credibility to propagate false claims. ‘It is not just harmful to the individual who has their identity stolen,' he said. ‘It is harmful to democracy as a whole when we cannot trust what we see.'

The reform will include an exemption for parody and satire. This is a thorny area: several studies suggest a large proportion of political deepfakes are humorous or harmless rather than malicious, and some experts warn that concern about the phenomenon risks tipping over into a moral panic.

In April last year Mette Frederiksen, Denmark's Social Democratic prime minister, was targeted with an AI-generated deepfake that fell into this grey area. After her government announced that it was abolishing a Christian public holiday, the right-wing populist Danish People's Party released a video of a fake press conference in which Frederiksen appeared to say she would scrap all the other religious holidays, including Easter and Christmas. The clip, which was presented as a dream sequence and clearly labelled as AI-manipulated content, prompted debate about the acceptable boundaries of the technology.

Denmark seeks to make it illegal to spread deepfake images, citing concern about misinformation

Associated Press · 27-06-2025 · Politics

COPENHAGEN, Denmark (AP) — Denmark is taking steps toward enacting a ban on the use of ‘deepfake' imagery online, saying such digital manipulations can stir doubts about reality and foster misinformation. The government said in a statement published Thursday that a ‘broad cross section' of parties in parliament support greater protections against deepfakes and that a planned bill is expected to make it illegal to share them or other digital imitations of personal characteristics.

Culture Minister Jakob Engel-Schmidt said in a statement that it was ‘high time that we now create a safeguard against the spread of misinformation and at the same time send a clear signal to the tech giants.' Officials said the measures are believed to be among the most extensive steps yet taken by a government to combat misinformation through deepfakes, which refer to highly realistic but fabricated content created by artificial intelligence tools.

Deepfakes usually come in the form of pictures or video but can also be audio. They can make it appear that someone said or did something that they didn't actually say or do. Famous figures who have been depicted in deepfakes include Taylor Swift and Pope Francis.

Authorities in different countries have taken varying approaches to tackling deepfakes, but they've mostly focused on sexually explicit images. U.S. President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person's consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped-up regulations for social media platforms.

Supporters of the Danish idea say that as technology advances, it will soon be impossible for people online to distinguish between real and manipulated material. ‘Since images and videos also quickly become embedded in people's subconscious, digitally manipulated versions of an image or video can create fundamental doubts about — and perhaps even a completely wrong perception of — what are genuine depictions of reality,' an English translation of a ministry statement said. ‘The agreement is therefore intended to ensure the right to one's own body and voice.'

The proposal would still allow for ‘parodies and satire,' though the ministry didn't specify how that would be determined. It said that the rules would apply only in Denmark and that violators wouldn't be subject to fines or imprisonment, even if some ‘compensation' could be warranted. The ministry said a proposal to amend Danish law on the issue will be made this summer, with an aim toward passage late this year or in early 2026. Any changes must abide by the country's international obligations and European Union law, it said.
