Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

Yahoo · May 29, 2025

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it's a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.
This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.
While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content.
At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines and decode cultural references.
In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren't isolated incidents. They were part of an organized campaign, powered in part by AI.
Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information – getting facts wrong – disinformation is intentionally fabricated and shared specifically to mislead and manipulate. A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook.
Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.
Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don't just help people remember – they help us feel. They foster emotional connections and shape our interpretations of social and political events.
This makes them especially powerful tools for persuasion – and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.
Using AI tools to piece together a picture of a story's narrator, the timeline of how it is told and the cultural details specific to where it takes place can help identify when a story doesn't add up.
Narratives are not confined to the content users share – they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.
For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.
Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection – one that considers not just what is said but who appears to be saying it and why.
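To make the idea concrete, here is a minimal heuristic sketch of extracting persona cues from a handle. This is our own illustration, not the lab's actual system (which relies on trained models); the lookup tables, example handles and function name are all hypothetical.

```python
import re

# Hypothetical example lists; a real system would use learned models
# and far richer knowledge sources.
KNOWN_ORGS = {"NYT": "New York Times", "BBC": "BBC", "CNN": "CNN"}
KNOWN_PLACES = {"NYC": "New York City", "LA": "Los Angeles"}

def handle_cues(handle: str) -> dict:
    """Extract rough persona signals embedded in a social media username."""
    cues = {"tokens": [], "org": None, "place": None}
    # Split camel case, acronyms, digits and lowercase runs into tokens:
    # "JamesBurnsNYT" -> ["James", "Burns", "NYT"]
    tokens = re.findall(r"[A-Z][a-z]+|[A-Z]{2,}|[a-z]+|\d+|[A-Z]",
                        handle.lstrip("@"))
    cues["tokens"] = tokens
    for t in tokens:
        if t in KNOWN_ORGS:
            cues["org"] = KNOWN_ORGS[t]      # institutional affiliation cue
        if t in KNOWN_PLACES:
            cues["place"] = KNOWN_PLACES[t]  # location cue
    return cues

print(handle_cues("@JamesBurnsNYT"))
print(handle_cues("@JimB_NYC"))
```

Even this crude tokenization surfaces the contrast from the example above: both handles hint at New York, but only one carries an institutional-credibility token.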
Also, stories don't always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between.
Humans handle this effortlessly – we're used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge.
Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.
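As a toy illustration of the timeline-extraction idea (our own simplified sketch, not the lab's method; the thread and dates are invented): anchor each post to any explicit date it mentions, then sort, recovering chronological order from a nonlinear telling. A real system must also resolve relative cues such as "earlier" or "the next day."

```python
import re
from datetime import datetime

# An invented, nonlinearly told thread: the shocking event comes first,
# the backstory later.
thread = [
    "BREAKING June 3: ballots found shredded at the county office.",
    "Back on May 28, the worker was reportedly seen after hours.",
    "Then on June 1, officials say the cameras went dark.",
]

MONTH_PATTERN = (r"(January|February|March|April|May|June|July|August|"
                 r"September|October|November|December) (\d{1,2})")

def extract_timeline(posts, year=2024):
    """Order posts by the explicit date each one mentions."""
    events = []
    for post in posts:
        m = re.search(MONTH_PATTERN, post)
        if m:
            date = datetime.strptime(f"{m.group(1)} {m.group(2)} {year}",
                                     "%B %d %Y")
            events.append((date, post))
    return [post for _, post in sorted(events)]

for post in extract_timeline(thread):
    print(post)
```

Sorting by anchored dates reconstructs the underlying sequence (May 28, June 1, June 3) even though the thread opened with its climax.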
Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation.
Consider the following sentence: 'The woman in the white dress was filled with joy.' In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive.
To use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it's critical to give AI this sort of cultural literacy. In our research, we've found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.
Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts in order to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then use countermeasures in real time.
In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.
Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be skeptical of suspect stories, thus counteracting falsehoods before they take root.
As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Mark Finlayson, Florida International University and Azwad Anjum Islam, Florida International University
Read more:
Disinformation campaigns are murky blends of truth, lies and sincere beliefs – lessons from the pandemic
Visual misinformation is widespread on Facebook – and often undercounted by researchers
Disinformation is rampant on social media – a social psychologist explains the tactics used against you
Mark Finlayson receives funding from US Department of Defense and the US National Science Foundation for his work on narrative understanding and influence operations in the military context.
Azwad Anjum Islam receives funding from Defense Advanced Research Projects Agency (DARPA).
