
Latest news with #FloridaInternationalUniversity

Weaponised storytelling: How AI is helping researchers sniff out disinformation campaigns

Time of India

2 days ago

  • Politics
  • Time of India

Highlights

  • The Cognition, Narrative and Culture Lab at Florida International University is developing artificial intelligence tools to detect disinformation campaigns that utilize narrative persuasion techniques.
  • Disinformation, which is intentionally fabricated to mislead, differs from misinformation, and recent incidents like the manipulation of social media by foreign adversaries have highlighted the dangers of such tactics in influencing U.S. politics.
  • AI systems are being trained to recognize cultural nuances and narrative structures, enabling better identification of disinformation that exploits symbols and sentiments within targeted communities.

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it's a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs. This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands.

For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defences against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content. At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines and decode cultural references.

Disinformation vs misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren't isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information - getting facts wrong - disinformation is intentionally fabricated and shared specifically to mislead and manipulate.

A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook. Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate US politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don't just help people remember - they help us feel. They foster emotional connections and shape our interpretations of social and political events.
This makes them especially powerful tools for persuasion - and, consequently, for spreading disinformation. A compelling narrative can override scepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of the narrator of a story, the timeline for how they tell it and cultural details specific to where the story takes place can help identify when a story doesn't add up.

Narratives are not confined to the content users share - they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.

For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection - one that considers not just what is said but who appears to be saying it and why.

Also, stories don't always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between. Humans handle this effortlessly - we're used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge. Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.

Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation. Consider the following sentence: "The woman in the white dress was filled with joy." In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive. In order to use AI to detect disinformation that weaponises symbols, sentiments and storytelling within targeted communities, it's critical to give AI this sort of cultural literacy.
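To make that notion of cultural literacy concrete, here is a minimal sketch of how a detection pipeline might condition a symbol's reading on the audience's cultural context. The lookup table, function names and labels are illustrative assumptions, not the lab's actual system:

```python
# Minimal sketch: culture-dependent reading of a narrative symbol.
# The tiny lexicon below is an illustrative assumption, not real data.
CULTURAL_CONNOTATIONS = {
    ("white dress", "western"): "celebration",  # e.g., weddings
    ("white dress", "east_asian"): "mourning",  # white as a funeral colour
}

def interpret_symbol(symbol: str, culture: str, default: str = "neutral") -> str:
    """Return the likely connotation of a symbol for a given audience."""
    return CULTURAL_CONNOTATIONS.get((symbol, culture), default)

def sentiment_clash(stated_sentiment: str, symbol: str, culture: str) -> bool:
    """Flag a story whose stated sentiment clashes with the symbol's
    culturally expected connotation - a weak signal worth human review."""
    return stated_sentiment == "joy" and interpret_symbol(symbol, culture) == "mourning"

# "The woman in the white dress was filled with joy."
print(sentiment_clash("joy", "white dress", "western"))     # False: reads as expected
print(sentiment_clash("joy", "white dress", "east_asian"))  # True: worth a second look
```

A real system would learn such associations from diverse corpora rather than a hand-written table, but the principle is the same: the same sentence can carry different signals for different audiences.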
In our research, we've found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.

Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts in order to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then deploy countermeasures in real time.

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be sceptical of suspect stories, thus counteracting falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
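As a rough illustration of what "identifying near-identical storylines" can involve at the simplest level, the sketch below pairs up posts by word-shingle overlap (Jaccard similarity). The threshold and helper names are hypothetical, and production systems would rely on much richer narrative features than raw word overlap:

```python
# Sketch: flag near-identical storylines via Jaccard similarity of word 3-shingles.
def shingles(text: str, n: int = 3) -> set:
    """Break a post into overlapping n-word tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(posts: list, threshold: float = 0.5) -> list:
    """Return index pairs of posts that tell an almost identical story."""
    sigs = [shingles(p) for p in posts]
    return [(i, j)
            for i in range(len(posts))
            for j in range(i + 1, len(posts))
            if jaccard(sigs[i], sigs[j]) >= threshold]

posts = [
    "Election worker caught tearing up mail-in ballots in Pennsylvania",
    "A Pennsylvania election worker caught tearing up mail-in ballots",
    "Local shelter rescues a sea turtle tangled in fishing nets",
]
print(near_duplicates(posts))  # [(0, 1)]: the two ballot posts cluster together
```

Coordinated campaigns often seed many lightly reworded copies of one storyline, which is why even this crude overlap measure can surface them; timing analysis and persona signals would then corroborate the flag.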

Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

Yahoo

5 days ago

  • Politics
  • Yahoo

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Mark Finlayson, Florida International University, and Azwad Anjum Islam, Florida International University.

Mark Finlayson receives funding from the US Department of Defense and the US National Science Foundation for his work on narrative understanding and influence operations in the military context. Azwad Anjum Islam receives funding from the Defense Advanced Research Projects Agency (DARPA).

Can dogs help to predict epileptic seizures?

RTÉ News

7 days ago

  • Health
  • RTÉ News

Analysis: New research combining a dog's natural instincts with technology might help people with epilepsy by predicting seizures

Dogs have long held a special place in Irish life, not just as loyal companions, but as trusted protectors of our homes, families and livestock. Their protective instinct is one of the earliest reasons humans formed bonds with them. Their importance goes back centuries, featuring in tales of Fionn MacCumhaill and his dogs Bran and Sceólang. They were noble, wise and fiercely loyal hounds, often sensing danger before it arrived and intervening to save lives.

Today, while most dogs are no longer fending off wolves, their protective instincts remain strong. Some even find a vocation as service dogs, extending their role to emotional and physical protection: guiding the visually impaired, alerting people to medical emergencies and providing support to those with PTSD or anxiety. This deep-rooted role of dogs as protectors has inspired new research looking at how a dog's natural instincts, combined with technology, might help people with epilepsy by predicting seizures and enabling faster intervention.

Why predicting seizures matters

More than 45,000 people in Ireland live with epilepsy, about one person in every hundred. Each year, Ireland records over 130 epilepsy-related deaths, some linked to a condition called sudden unexpected death in epilepsy (SUDEP). For some, medication effectively controls seizures. For others, seizures can happen without warning, disrupting daily life and posing serious safety risks.

While some people experience early warning signs in the form of a type of seizure commonly called an aura, others receive no advance notice. For those who do sense them, auras offer valuable time to seek safety or alert others. But what if someone, or something, could detect a seizure even earlier and more reliably? That question is at the heart of the PAWSENSE (Paws Assisting with Sensors for Epilepsy Needs and Safety) research project. This project, led by the Insight Research Ireland Centre for Data Analytics at Dublin City University, explores whether seizure-alert assistance dogs, equipped with wearable technology, can help detect and alert to seizures before they happen.

The science behind the sniff

The concept rests on a growing body of research suggesting that dogs can detect seizures before they occur by picking up on subtle changes in human scent. These changes involve volatile organic compounds (VOCs), which are released through our breath, skin, sweat and other bodily fluids. Everyone's VOC profile is unique, and research indicates it may change prior to a seizure. Studies from Florida International University and Canine Assistants have shown that specially trained dogs can detect these changes up to an hour in advance of a seizure. Research by Dr. Neil Powell and a team at Queen's University Belfast found that even untrained pet dogs exhibit unusual behaviour, such as nudging, barking or pawing, when their owner is about to have a seizure.

What dogs can do with smell is extraordinary. While humans have around five million scent receptors in their noses, dogs can have up to 300 million, depending on the breed. Dogs can layer scents the way humans might hear individual instruments in an orchestra. This allows them to pick up a specific odour, like a pre-seizure VOC, even in a crowded, noisy or chaotic environment.
Dog collar motion sensors

While previous efforts focused solely on training dogs to detect and alert to seizures, this research adds a vital second layer with a sensor-equipped dog collar. When the dog detects the scent associated with an oncoming seizure, it responds with a specific trained alert behaviour, such as spinning. This alerts the person that a seizure is imminent.

The dog wears a collar with motion sensors that track the alerting spin behaviour. When the collar detects this movement, it can automatically send an alert to a family member, friend or other designated contact, along with the GPS location of the person who is about to have a seizure (see the sketch after this article). This advance notice gives the individual time to get to safety and allows others to step in and provide support.

Safeguarding dogs in research

The idea of dogs playing a role in medical detection is not new. They have been studied in contexts ranging from cancer detection to hypoglycaemia alerts. However, epilepsy poses distinct challenges, as seizures often happen silently and without warning, which demands both fast and reliable alert systems. Training the dogs and carefully matching them to individuals is critical. Although seizure-alert dogs are not currently available through Dogs for the Disabled, the organisation is managing the matching, training and ongoing support for the dogs as part of this research. Dogs have spent thousands of years watching over us, and it is important that the same is done for them.

'Not a silver bullet'

Wearable technology has made significant strides in detecting certain types of seizures, but its ability to reliably predict seizures is still limited. Combining technology with dogs' scent-detection capabilities may offer a more adaptive and reliable solution, but PAWSENSE is not a silver bullet. It is a multidisciplinary effort addressing a complex issue with nuance and care. If successful, this model could be adapted internationally, or even applied to other conditions where behavioural or scent cues signal health events.

Crucially, the project acknowledges the emotional toll epilepsy takes. A dependable alert system might not stop seizures from happening, but it can still make a big difference. Just knowing that a warning will come, and that help is on the way, may ease anxiety and bring peace of mind, helping people feel safer and more confident in their daily lives. The question remains, however: can such a system become broadly accessible through public healthcare, or will it remain a niche, privately supported tool?

What science is beginning to measure, dogs have long been able to sense. This research provides a glimpse into how we might learn from nature to shape the healthcare tools of tomorrow.
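To make the collar's detection-and-alert flow concrete, here is a minimal, hypothetical sketch: sustained rotation in the gyroscope trace is treated as the trained spin, and a contact is notified with the wearer's location. The thresholds, sensor interface and alert helper are illustrative assumptions, not the PAWSENSE implementation:

```python
# Sketch: spot a sustained "spin" alert in collar gyroscope readings and
# notify a designated contact with the wearer's GPS position.
# All thresholds and interfaces here are hypothetical.
from dataclasses import dataclass

SPIN_RATE_DPS = 120.0    # assumed yaw rate (deg/s) during a trained alert spin
MIN_SPIN_SECONDS = 3.0   # assumed duration separating an alert from ordinary play

@dataclass
class Reading:
    yaw_rate_dps: float  # rotation rate around the collar's vertical axis
    timestamp_s: float

def detect_spin(readings: list) -> bool:
    """True once rotation stays above SPIN_RATE_DPS for MIN_SPIN_SECONDS."""
    spin_start = None
    for r in readings:
        if abs(r.yaw_rate_dps) >= SPIN_RATE_DPS:
            if spin_start is None:
                spin_start = r.timestamp_s
            if r.timestamp_s - spin_start >= MIN_SPIN_SECONDS:
                return True
        else:
            spin_start = None  # rotation broke off; reset the window
    return False

def send_alert(contact: str, gps: tuple) -> None:
    """Placeholder: a real collar would relay this over cellular or Bluetooth."""
    print(f"ALERT to {contact}: possible seizure imminent near {gps}")

# 4.5 seconds of fast rotation sampled twice per second
readings = [Reading(150.0, t * 0.5) for t in range(10)]
if detect_spin(readings):
    send_alert("designated contact", (53.3498, -6.2603))  # Dublin, for illustration
```

In practice the hard part is distinguishing the trained alert from everyday movement, which is why the project pairs the sensor data with carefully trained, individually matched dogs.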

Trump's immigration crackdown unnerves Cuban exiles long shielded from deportation

Los Angeles Times

27-05-2025

  • Politics
  • Los Angeles Times

MIAMI — Immigration officials said Tomás Hernández worked in high-level posts for Cuba's foreign intelligence agency for decades before migrating to the United States to pursue the American dream. The 71-year-old was detained by federal agents outside his Miami-area home in March and accused of hiding his ties to Cuba's Communist Party when he obtained permanent residency.

Cuban Americans in South Florida have long clamored for a firmer hand with Havana, and the recent apprehensions of Hernández and several other former Cuban officials for deportation have been extremely popular among the politically powerful exile community. 'It's a political gift to Cuban American hardliners,' said Eduardo Gamarra, a Latin American expert at Florida International University. But many Cubans fear they could be next on Trump's list, he said, and 'some in the community see it as a betrayal.'

While President Trump's mass deportation pledge has frightened migrants from many nations, it has come as something of a shock to the 2.4 million Cuban Americans, who strongly backed the Republican twice and have long enjoyed a place of privilege in the U.S. immigration system. Amid record arrivals of migrants from the Caribbean island, Trump in March revoked temporary humanitarian parole for about 300,000 Cubans. Many have been detained ahead of possible deportation.

Among those facing deportation is a pro-Trump Cuban rapper behind the hit song 'Patria y Vida' — 'Homeland and Life' — that became the unofficial anthem of anti-communist protests on the island in 2021 and drew praise from the likes of then-Republican Sen. Marco Rubio, now secretary of State. Eliéxer Márquez, who raps under the name El Funky, said he received notice this month that he had 30 days to leave the U.S.

Thanks to Cold War laws aimed at removing Fidel Castro, Cuban migrants for many decades enjoyed almost automatic refugee status in the U.S. and could obtain green cards a year after entry, unlike migrants from virtually every other country. Support for Trump among likely Cuban American voters in Miami was at an all-time high on the eve of last year's election, according to a poll by Florida International University, which has been tracking the Cuban American community since 1991. Trump rarely mentions Cubans in his attacks on migrant targets including Venezuelans and Haitians. That has given many Cubans hope that they will remain immune to immigration enforcement actions.

Democrats, meanwhile, have been trying to turn the immigration crackdown to their advantage. In April, grassroots groups erected two giant billboards on Miami highways calling Rubio and Republican Reps. Mario Díaz-Balart, María Elvira Salazar and Carlos Giménez 'traitors' to the Cuban American community for failing to protect tens of thousands of migrants from Trump's immigration policies.

The arrest of former Cuban state agents is one way to bolster Trump allies, Gamarra said. In March, Giménez sent Homeland Security Secretary Kristi Noem a letter with the names of 108 people he said were former Cuban state agents or Communist Party officials living unlawfully in the U.S. 'It is imperative that the Department of Homeland Security enforce existing U.S. laws to identify, deport and repatriate these individuals who pose a direct threat to our national security, the integrity of our immigration system and the safety of Cuban exiles and American citizens alike,' Giménez wrote, adding that the U.S. remains a 'beacon of hope and freedom for those escaping tyranny.'
Giménez's target list was compiled by Luis Dominguez, who left Cuba in 1971 and has made it his mission to topple Cuba's government. In 2009, when the internet was still a novelty in Cuba, Dominguez said he posed as a 27-year-old female sports journalist from Colombia to lure Castro's son Antonio into an online romance. 'Some people dream with making money, or with growing old and going on vacation,' said Dominguez, who lives in Connecticut. 'I dream with seeing my country free.'

With support from the right-wing Foundation for Human Rights in Cuba, he started combing social media and relying on a well-oiled network of anti-socialist sources, inside Cuba and outside the country, to dox officials allegedly behind human rights abuses and violations of democratic norms. To date, his website, Represores Cubanos — Cuban Repressors — has identified more than 1,200 such state agents, some 150 in the United States.

'They're chasing the American dream, but previously they condemned it while pursuing the Cuban dream,' Dominguez said. 'It's the typical double life of any Communist regime. When they were in power they criticized anything about the U.S. But now that they're here, they love it.' Dominguez, 62, said he regularly shares his findings with federal law enforcement. A spokesman for U.S. Immigration and Customs Enforcement didn't comment on the agency's relationship with the activist.

Enrique Garcia, a former colleague, said he studied with Hernández in the former Soviet Union in the 1970s. Upon their return, Hernández was sent to work in the spy agency's elite 'North America' department, said Garcia. Garcia, who defected to the U.S. in the 1990s and has devoted himself to helping American spy catchers unmask Cuban agents, said one-time Cuban agents have infiltrated the current migration wave while hiding their past and even current loyalties to the Cuban government. 'You can't be on both sides at the same time,' he said.

It's not known when Hernández entered the U.S. and why. U.S. immigration law generally bars people who've belonged to Communist parties. Anyone caught lying on their green card application can be deported or prosecuted. But removing Cubans who are no longer welcome in the U.S. could prove challenging. The Trump administration sends a single 60-passenger plane to Cuba every month as part of its deportation drive, unchanged from the past year's average, according to Witness at the Border, which tracks removal flights. At that rate, it would take almost 700 years to send back the estimated 500,000 Cubans who arrived during the Biden administration and now lack protected status.

At Versailles Restaurant, the epicenter of Miami's Little Havana, few among its anti-Communist clientele seemed poised to turn on Trump, who visited the iconic cafe twice during the recent presidential campaign. Most of the aging exiles applauded Trump's migration crackdown, but there were a few cracks in the GOP armor. As the late-afternoon banter switched between talk of CIA plots to assassinate Castro and President John F. Kennedy's failure to provide air cover during the 1961 Bay of Pigs invasion, one retiree stood up and quietly stepped away from his friends. 'People are trembling,' Tony Freitas, who came to the U.S. from Cuba in the 1980 Mariel boatlift, said in a hushed voice. 'For any little thing, you could be deported.'

Goodman writes for the Associated Press. AP journalist Gisela Salomon contributed to this report.
