Latest news with #OperationOverload


Euractiv
14-07-2025
- Politics
- Euractiv
Whack-a-mole warfare: Europe's battle against AI-fuelled Kremlin lies
Chris Kremidas-Courtney is a senior visiting fellow at the European Policy Centre, associate fellow at the Geneva Centre for Security Policy, and author of 'The Rest of Your Life: Five Stories of Your Future.'

Since June 2024, the Kremlin-driven Operation Overload has become Europe's most sustained disinformation blitz against the fact-checking community. According to a newly released report by CheckFirst and Reset Tech, Operation Overload's AI-driven narrative variants are popping up faster than fact-checkers can knock them down, turning Europe's information space into a perpetual game of disinformation whack-a-mole.

If Brussels doesn't harness the full force of the Digital Services Act now, demanding real-time platform accountability and enabling cross-border threat-sharing, the next wave of state-sponsored fakes could reshape our public discourse before we even spot the first lie. The question is, can Brussels and the platforms it seeks to regulate keep pace with a campaign that adapts as quickly as the neural networks powering it?

Since September 2024, the Russian-backed operation (also known as Matryoshka) has more than doubled its email attacks, overwhelming media and fact-checking communities with an average of 2.6 fabricated pitches per day. In the same way that waves of drones or missiles can overwhelm air defences, this operation seeks to do the same to journalists and fact-checkers.

Fake emails are only the tip of the iceberg for this coordinated propaganda machine. According to the Overload report, it also includes 11,000 crypto-themed 'reposter' bots on Twitter/X and thousands of deepfake videos.

AI-driven content creation has become the operation's backbone. Deepfake audio, AI-generated images and 'magazine cover' forgeries are now churned out at scale, each twisted around a 'kernel of truth'. To evade detection, they draw on isolated reports, such as Ukrainian call-centre irregularities, or decontextualise verifiable details of real events. This perpetual 'whack-a-mole' exhausts journalistic resources and fragments the fact-checking ecosystem. For example, CheckFirst logged 180 debunks, yet fewer than half were framed within the larger Operation Overload context.

But all these numbers still undersell the operation's enormity. In February 2025, the American Sunlight Project found that Kremlin-aligned networks were already producing over three million AI-forged articles per year – a tsunami of disinformation now poisoning AIs like ChatGPT and Gemini, eating away at our digital discourse from within.

Across France, Poland and Moldova, Overload adapted its four pillars of anti-Ukrainian vitriol, election scares, identity smears and calls to violence to local flashpoints (e.g. Macron, historical grievances, Sandu's legitimacy). Such targeted campaigns require equally tailored counter-messaging, since one-size-fits-all rebuttals leave gaps for the next hostile narrative.

Most revealing is who amplifies these lies. High-profile Kremlin-aligned 'amplifier' accounts on Twitter/X lend the campaign mainstream reach and grant Operation Overload an aura of credibility. While the direct link between these influencers and Russian state agencies remains opaque, their synchronised behaviour and consistent prioritisation by platform algorithms indicate an operation that transcends mere grassroots trolling.
Under the EU's Digital Services Act, Very Large Online Platforms (VLOPs) must swiftly mitigate systemic risks such as election interference and incitement to violence. Yet over 70% of flagged content lingered online for months, and platforms missed reactivated accounts and paid-for authentication abuse. If the EU allows this to persist by eschewing public audits, fines or mandated transparency, the DSA risks becoming little more than window dressing, ill-suited to protect against state-sponsored disinformation.

A four-pillar defence

Europe can't treat each Overload hit as an oddity. Instead, it must tackle AI-enabled disinformation with four coordinated efforts:

- Real-time, multi-platform threat sharing: Set up a shared dashboard with encrypted feeds so that the moment one fact-checking group or platform spots a new fake image, bot network or edited video, it automatically alerts everyone else so they can all block it before it spreads (see the sketch below for what such an alert could look like).
- Scalable AI-detection investment: Invest in AI systems that can automatically scan millions of videos, images and posts every hour, flagging deepfakes and bulk-generated disinformation so platforms and fact-checkers can remove them before they go viral.
- Give the DSA teeth: Publicly name and sanction non-compliant VLOPs, demand rapid takedowns under Articles 34–35, and require quarterly transparency reports on coordinated inauthentic behaviour.
- Narrative literacy campaigns: Launch public-awareness campaigns that go beyond debunking individual lies to teaching people how to spot when a misleading story is built around a 'kernel of truth' or artificially bulk-produced, so everyone can challenge and report fakes, not just fact-checkers.

Operation Overload is an AI-fuelled, multi-vector threat crafted by Kremlin-aligned actors. The Overload 2 report maps this danger. It's now up to national capitals to forge a robust cognitive defence for Europe's democracy.
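To make the first pillar concrete, here is a minimal sketch of what a shared alert could look like: one organisation signs a structured report and posts it to a common feed that other participants subscribe to. This is illustrative only; the feed URL, field names and pre-shared key are hypothetical, not part of any existing system.

```python
# Minimal sketch of a cross-organisation disinformation alert, assuming a
# hypothetical shared endpoint (FEED_URL) and a pre-shared HMAC key
# distributed out of band. No real dashboard or schema is implied.
import hashlib
import hmac
import json
import time

import requests  # pip install requests

FEED_URL = "https://example.org/api/v1/alerts"  # hypothetical shared feed
SHARED_KEY = b"replace-with-pre-shared-key"


def build_alert(content_url: str, media_sha256: str,
                narrative: str, platform: str, reporter: str) -> dict:
    """Describe one piece of suspected coordinated inauthentic content."""
    return {
        "reported_at": int(time.time()),
        "reporter": reporter,          # e.g. a fact-checking organisation
        "platform": platform,          # where the content was observed
        "content_url": content_url,    # link to the suspect post
        "media_sha256": media_sha256,  # lets others match re-uploads
        "narrative": narrative,        # short label for the storyline
    }


def publish_alert(alert: dict) -> None:
    """Sign the alert with the pre-shared key and post it to the feed."""
    body = json.dumps(alert, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    response = requests.post(
        FEED_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "X-Signature": signature},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    publish_alert(build_alert(
        content_url="https://x.com/example/status/123",  # placeholder
        media_sha256=hashlib.sha256(b"example media bytes").hexdigest(),
        narrative="fabricated riot footage",
        platform="X",
        reporter="example-factcheck-org",
    ))
```

Subscribers would verify the X-Signature header with the same pre-shared key before acting on an alert, so that a forged report cannot trigger automated blocking.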


WIRED
01-07-2025
- Politics
- WIRED
A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a 'Content Explosion'
Consumer-grade AI tools have supercharged Russian-aligned disinformation as pictures, videos, QR codes, and fake websites have proliferated.

A pro-Russia disinformation campaign is leveraging consumer artificial intelligence tools to fuel a 'content explosion' focused on exacerbating existing tensions around global elections, Ukraine, and immigration, among other controversial issues, according to new research published last week. The campaign, known by many names including Operation Overload and Matryoshka (other researchers have also tied it to Storm-1679), has been operating since 2023 and has been linked to the Russian government by multiple groups, including Microsoft and the Institute for Strategic Dialogue.

The campaign disseminates false narratives by impersonating media outlets with the apparent aim of sowing division in democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. Hundreds of AI-manipulated videos from the campaign have tried to fuel pro-Russian narratives. The report outlines how, between September 2024 and May 2025, the amount of content being produced by those running the campaign increased dramatically and is receiving millions of views around the world.

In their report, the researchers identified 230 unique pieces of content promoted by the campaign between July 2023 and June 2024, including pictures, videos, QR codes, and fake websites. Over the last eight months, however, Operation Overload churned out a total of 587 unique pieces of content, with the majority of them created with the help of AI tools, researchers said.

The researchers said the spike in content was driven by consumer-grade AI tools that are available for free online. This easy access helped fuel the campaign's tactic of 'content amalgamation,' where those running the operation were able to produce multiple pieces of content pushing the same story thanks to AI tools (a simple way to surface such families of near-duplicates is sketched below).

'This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics,' researchers from Reset Tech, a London-based nonprofit that tracks disinformation campaigns, and Check First, a Finnish software company, wrote in the report. 'The campaign has substantially amped up the production of new content in the past eight months, signalling a shift toward faster, more scalable content creation methods.'

Researchers were also stunned by the variety of tools and types of content the campaign was pursuing. 'What came as a surprise to me was the diversity of the content, the different types of content that they started using,' Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, tells WIRED. 'It's like they have diversified their palette to catch as many like different angles of those stories. They're layering up different types of content, one after another.'

Atanasova added that the campaign did not appear to be using any custom AI tools to achieve its goals, but was using AI-powered voice and image generators that are accessible to everyone. While it was difficult to identify all the tools the campaign operatives were using, the researchers were able to narrow it down to one tool in particular: Flux AI, a text-to-image generator developed by Black Forest Labs, a Germany-based company founded by former employees of Stability AI.
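The 'content amalgamation' tactic, many slightly varied images pushing one story, is the kind of pattern perceptual hashing can surface. The sketch below is a hypothetical illustration, not the researchers' pipeline: it clusters near-duplicate images with the open-source ImageHash library, and the folder name and distance threshold are assumptions.

```python
# A minimal sketch, not the researchers' method: grouping near-duplicate
# images with perceptual hashing to surface 'content amalgamation', i.e.
# families of slightly varied AI images pushing the same story.
from pathlib import Path

import imagehash       # pip install ImageHash
from PIL import Image  # pip install Pillow

MAX_DISTANCE = 8  # assumed Hamming-distance threshold for 'same story'


def cluster_images(paths: list[Path]) -> list[list[Path]]:
    """Greedily group images whose perceptual hashes are close together."""
    clusters: list[tuple[imagehash.ImageHash, list[Path]]] = []
    for path in paths:
        h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
        for representative, members in clusters:
            if h - representative <= MAX_DISTANCE:  # Hamming distance
                members.append(path)
                break
        else:
            clusters.append((h, [path]))
    return [members for _, members in clusters]


if __name__ == "__main__":
    images = sorted(Path("suspect_images").glob("*.jpg"))  # hypothetical folder
    for group in cluster_images(images):
        if len(group) > 1:  # multiple variants suggest amalgamation
            print([p.name for p in group])
```

Because perceptual hashes change only slightly under crops, re-encoding and minor edits, variants of a single AI-generated image tend to fall into the same cluster even when their file hashes differ.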
Using the SightEngine image analysis tool, the researchers found a 99 percent likelihood that a number of the fake images shared by the Overload campaign, some of which claimed to show Muslim migrants rioting and setting fires in Berlin and Paris, were created using image generation from Flux AI (a scripted version of such a check is sketched below). The researchers were then able to generate images that closely replicate the aesthetic of the published images using prompts that included discriminatory language, such as 'angry Muslim men.' This highlights 'how AI text-to-image models can be abused to promote racism and fuel anti-Muslim stereotypes,' the researchers wrote, adding that it raises 'ethical concerns on how prompts work across different AI generation models.'

'We build in multiple layers of safeguards to help prevent unlawful misuse, including provenance metadata that enables platforms to identify AI generated content, and we support partners in implementing additional moderation and provenance tools,' a spokesperson for Black Forest Labs wrote in an email to WIRED. 'Preventing misuse will depend on layers of mitigation as well as collaboration between developers, social media platforms, and authorities, and we remain committed to supporting these efforts.' Atanasova tells WIRED the images she and her colleagues reviewed did not contain any metadata.

Operation Overload also uses AI voice-cloning technology to manipulate videos, making it appear as if prominent figures are saying things they never did. The number of videos produced by the campaign jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025. The researchers said the majority of the videos in the last eight months used AI technology to trick those who saw them.

In one instance, for example, the campaign published a video in February on X that featured Isabelle Bourdon, a senior lecturer and researcher at France's University of Montpellier, seemingly encouraging German citizens to engage in mass riots and vote for the far-right Alternative for Germany (AfD) party in federal elections. This was fake: the footage was taken from a video on the school's official YouTube channel where Bourdon discusses a recent social science prize she won. In the manipulated video, AI voice-cloning technology made it seem as if she was discussing the German elections instead.

The AI-generated content produced by Operation Overload is shared on over 600 Telegram channels, as well as by bot accounts on social media platforms like X and Bluesky. In recent weeks, the content has also been shared on TikTok for the first time. This was first spotted in May, and while the number of accounts was small (just 13), the videos posted were seen 3 million times before the platform demoted the accounts.

'We are highly vigilant against actors who try to manipulate our platform and have already removed the accounts in this report,' Anna Sopel, a TikTok spokesperson, tells WIRED. 'We detect, disrupt and work to stay ahead of covert influence operations on an ongoing basis and report our progress transparently every month.'

The researchers pointed out that while Bluesky had suspended 65 percent of the fake accounts, 'X has taken minimal action despite numerous reports on the operation and growing evidence for coordination.' X and Bluesky did not respond to requests for comment.
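For reference, a check like the one the researchers describe can be scripted against SightEngine's API. The sketch below assumes the check.json endpoint and 'genai' model from SightEngine's public documentation; the credentials and image URL are placeholders, and the exact response fields should be verified against the current docs before relying on them.

```python
# Minimal sketch of scoring an image for AI generation with SightEngine,
# assuming the publicly documented check.json endpoint and 'genai' model;
# credentials are placeholders and response fields are an assumption to
# verify against SightEngine's current documentation.
import requests  # pip install requests

API_USER = "your-api-user"      # placeholder credentials
API_SECRET = "your-api-secret"


def ai_generated_score(image_url: str) -> float:
    """Return SightEngine's 0..1 likelihood that the image is AI-generated."""
    response = requests.get(
        "https://api.sightengine.com/1.0/check.json",
        params={
            "url": image_url,
            "models": "genai",  # AI-generated-image detection model
            "api_user": API_USER,
            "api_secret": API_SECRET,
        },
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    # Assumed response shape: the genai model reports its score under
    # type.ai_generated (0 = likely authentic, 1 = likely AI-generated).
    return payload["type"]["ai_generated"]


if __name__ == "__main__":
    score = ai_generated_score("https://example.org/suspect-image.jpg")
    print(f"AI-generated likelihood: {score:.2f}")
```

A score near 1.0 indicates the image is very likely AI-generated; SightEngine's full response contains more detail than is shown here.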
Once the fake and AI-generated content is created, Operation Overload does something unusual: it sends emails to hundreds of media and fact-checking organizations across the globe, with examples of its fake content on various platforms, along with requests for the fact-checkers to investigate whether it is real or not. While it may seem counterintuitive for a disinformation campaign to alert those trying to tackle disinformation about its efforts, for the pro-Russia operatives, getting their content posted online by a real news outlet, even if it is covered with the word 'FAKE', is the ultimate aim.

According to the researchers, up to 170,000 such emails were sent to more than 240 recipients since September 2024. The messages typically contained multiple links to the AI-generated content, but the email text was not generated using AI, the researchers said.

Pro-Russia disinformation groups have long been experimenting with AI tools to supercharge their output. Last year a group dubbed CopyCop, likely linked to the Russian government, was shown to be using large language models, or LLMs, to create fake websites designed to look like legitimate media outlets. While these attempts don't typically get much traffic, the accompanying social media promotion can attract attention, and in some cases the fake information can end up at the top of Google search results.

A recent report from the American Sunlight Project estimated that Russian disinformation networks were producing at least 3 million AI-generated articles each year, and that this content was poisoning the output of AI-powered chatbots like OpenAI's ChatGPT and Google's Gemini. Researchers have repeatedly shown how disinformation operatives are embracing AI tools, and as it becomes increasingly difficult for people to tell real from AI-generated content, experts predict the surge in AI content fuelling disinformation campaigns will continue. 'They already have the recipe that works,' Atanasova says. 'They know what they're doing.'
Yahoo
31-05-2025
- General
- Yahoo
Russia-linked disinfo campaign stokes anti-Ukrainian sentiment in Poland before June 1 vote, investigation finds
Russia-aligned influence campaigns have intensified efforts to spread disinformation targeting Ukrainian refugees in Poland ahead of the country's presidential runoff election on June 1, according to a new investigation by the Institute for Strategic Dialogue (ISD).

ISD found that Russia-aligned actors are amplifying anti-Ukrainian sentiment through coordinated campaigns across platforms such as X (formerly Twitter), Bluesky, Facebook, and Telegram. These efforts include operations like "Operation Overload" and the pro-Kremlin network "Pravda/Portal Kombat," which use impersonation, AI-generated content, and coordinated amplification to push false narratives.

One Operation Overload campaign claimed that Ukrainian refugees were preparing terrorist attacks targeting the Polish elections, garnering over 654,000 views and nearly 5,800 interactions on X. Another falsely accused Ukrainians of plotting attacks on politicians in neighboring countries.

The investigation, published on May 30, highlighted that ChatGPT replicated misleading claims from the Pravda network, including accusations that Ukrainians were responsible for a rise in violent crime in Poland. A satirical video about refugees was manipulated by a pro-Kremlin influencer to portray Ukrainians as exploiting Poland's welfare system, sparking calls for deportations and online hate. The influencer's post alone received 161,500 views, 900 shares, and 380 comments, many of which were derogatory.

ISD warns that immigration has become a key issue in the Polish election discourse, noting that both remaining presidential candidates have taken positions targeting Ukrainian refugees. Candidate Rafal Trzaskowski proposed halting child benefits for non-working refugees, while Karol Nawrocki suggested placing them last in line for public services.

The investigation urges Polish authorities to remain vigilant against Russia-backed disinformation that fuels discrimination and societal division. ISD also calls on platforms to meet their obligations under the EU's Digital Services Act by clearly labeling AI-generated content and addressing systemic risks to electoral integrity. The European Commission is urged to expand enforcement of sanctions on Russian-linked aggregators and to coordinate with internet service providers to counter foreign information manipulation more effectively.