Latest news with #Kremlin-backed


New Straits Times
a day ago
- General
- New Straits Times
Young Ukrainian women see reporting from frontline as a duty
WHEN Russia invaded Ukraine in early 2022, Olha Kyrylenko was at home watching images from her colleagues risking their lives to cover the siege of the port city of Mariupol. "I asked myself whether I could work in such conditions at all," said Kyrylenko, now a 26-year-old reporter for the leading media outlet Ukrainska Pravda. "And I was like, well, I have to at least try," she said during a rare break in war-torn eastern Ukraine.

She went to cover the front lines for the first time shortly after Russia invaded, and noticed that she was far from the only woman. "All my friends, journalists working in the war, are women," Kyrylenko said. While women journalists had already been covering fighting between Ukraine and Kremlin-backed separatists since 2014, a new generation emerged in 2022.

Mobilisation in Ukraine's army is obligatory only for men, but the country has seen more and more women joining its ranks. Two Ukrainska Pravda journalists have been drafted into the army, including the photographer Kyrylenko worked with on her first reporting trip to the front in 2022. Since then, she has been working on her own.

That was also the case for Viktoria Roshchyna, whose death in Russian detention last year highlighted the risks taken by Ukrainian journalists covering the war. The 27-year-old went missing in 2023 during a high-risk trip to territory occupied by Russian forces. Her body was sent back only in February, bearing signs of torture, according to a media investigation. Kyrylenko worked with Roshchyna and remembers her as "tenacious" and ready to work where no one else would. But Kyrylenko said her death had forced her to think hard about whether journalism "is worth risking your life".

In April, Kyrylenko was reporting in Pokrovsk, a vital frontline logistics hub where fighting is fierce, on her mother's birthday. She promised her mother that nothing would happen to her. But, she said, "my life right now is not the highest value in my life". The main thing is "that my country as a country should survive and that the truth about this war, whatever it is, should be present in the information space".

Keeping a professional distance as a Ukrainian journalist covering the war can be difficult. Alina Yevych, a 25-year-old reporter, said she had managed — for a while. Then she met a woman who said she had been kidnapped and raped for a week by Russian soldiers in Mariupol. After hearing her words, "I don't know how to be objective", said Yevych, who works along with her boss Maria Davydenko for Vchasno, an independent news outlet. Yevych said soldiers they interview sometimes found it hard to believe that women could understand how tanks work or listen to their stories without flinching. Mentalities are changing, Yevych said, but "for some people, you really remain a girl in this war".

Vyacheslav Maryshev, editor in chief of the visuals department at Suspilne, a state-funded news organisation in Kharkiv in northeast Ukraine, said his female employees tended to take fewer unnecessary risks. The men sometimes want to act like "Rambo" to prove their bravery, he said, but in his team of war reporters there are more women than men. One of them, Oleksandra Novosel, said she had just convinced her bosses to invest in bulletproof jackets more suited to women's body shapes. At the start of the invasion, one of the vests available at Suspilne weighed 12kg — around a quarter of her weight. "I walked around in it and wobbled," Novosel recalled.
The 30-year-old said she would prefer not to need a bulletproof vest, and had not imagined working in a warzone until her country became one. She would rather be covering courts or investigating corruption, she said, but for the moment, reporting on the war is "my duty".

Kuwait Times
2 days ago
- Politics
- Kuwait Times
‘My duty’: Ukraine women reporting from the war front
KRAMATORSK: When Russia invaded Ukraine in early 2022, Olha Kyrylenko was at home watching images from her colleagues risking their lives to cover the siege of the port city of Mariupol. 'I asked myself whether I could work in such conditions at all,' said Kyrylenko, now a 26-year-old reporter for the leading media outlet Ukrainska Pravda. 'And I was like, well, I have to at least try,' she told AFP during a rare break in war-torn eastern Ukraine.

She went to cover the front lines for the first time shortly after Russia invaded, and noticed that she was far from the only woman. 'All my friends, journalists working in the war, are women,' Kyrylenko said. While women journalists had already been covering fighting between Ukraine and Kremlin-backed separatists since 2014, a new generation emerged in 2022.

'Truth of this war'

Mobilization in Ukraine's army is obligatory only for men, but the country has seen more and more women joining its ranks. Two Ukrainska Pravda journalists have been drafted into the army, including the photographer Kyrylenko worked with on her first reporting trip to the front in 2022. Since then, she has been working on her own. That was also the case for Viktoria Roshchyna, whose death in Russian detention last year highlighted the risks taken by Ukrainian journalists covering the war. The 27-year-old went missing in 2023 during a high-risk trip to territory occupied by Russian forces. Her body was sent back only in February, bearing signs of torture, according to a media investigation. Kyrylenko worked with Roshchyna and remembers her as 'tenacious' and ready to work where no one else would. But Kyrylenko said her death had forced her to think hard about whether journalism 'is worth risking your life'.

In April, Kyrylenko was reporting in Pokrovsk, a vital front-line logistics hub where fighting is fierce, on her mother's birthday. She promised her mother that nothing would happen to her. But, she said, 'my life right now is not the highest value in my life'. The main thing is 'that my country as a country should survive and that the truth about this war, whatever it is, should be present in the information space'.

'A woman in this war'

Keeping a professional distance as a Ukrainian journalist covering the war can be difficult. Alina Yevych, a 25-year-old reporter, said she had managed—for a while. Then she met a woman who said she had been kidnapped and raped for a week by Russian soldiers in Mariupol. After hearing her words, 'I don't know how to be objective', said Yevych, who works along with her boss Maria Davydenko for Vchasno, an independent news outlet. Yevych said soldiers they interview sometimes found it hard to believe that women could understand how tanks work or listen to their stories without flinching. Mentalities are changing, Yevych said, but 'for some people, you really remain a girl in this war'.

'Play Rambo'

Vyacheslav Maryshev, editor in chief of the visuals department at Suspilne, a state-funded news organization in Kharkiv in northeast Ukraine, said his female employees tended to take fewer unnecessary risks. The men sometimes want to act like 'Rambo' to prove their bravery, he said, but in his team of war reporters there are more women than men. One of them, Oleksandra Novosel, said she had just convinced her bosses to invest in bulletproof jackets more suited to women's body shapes. At the start of the invasion, one of the vests available at Suspilne weighed 12 kilograms, around a quarter of her weight.
'I walked around in it and wobbled,' Novosel recalled. The 30-year-old said she would prefer not to need a bulletproof vest, and had not imagined working in a war zone until her country became one. She would rather be covering courts or investigating corruption, she said, but for the moment, reporting on the war is 'my duty'. — AFP


Time of India
2 days ago
- Politics
- Time of India
Weaponised storytelling: How AI is helping researchers sniff out disinformation campaigns
Highlights
- The Cognition, Narrative and Culture Lab at Florida International University is developing artificial intelligence tools to detect disinformation campaigns that utilize narrative persuasion techniques.
- Disinformation, which is intentionally fabricated to mislead, differs from misinformation, and recent incidents like the manipulation of social media by foreign adversaries have highlighted the dangers of such tactics in influencing US politics.
- AI systems are being trained to recognize cultural nuances and narrative structures, enabling better identification of disinformation that exploits symbols and sentiments within targeted communities.

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it's a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.

This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defences against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content. At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines and decode cultural references.

Disinformation vs misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren't isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information - getting facts wrong - disinformation is intentionally fabricated and shared specifically to mislead and manipulate. A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook. Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate US politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don't just help people remember - they help us feel. They foster emotional connections and shape our interpretations of social and political events. This makes them especially powerful tools for persuasion - and, consequently, for spreading disinformation. A compelling narrative can override scepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of the narrator of a story, the timeline for how they tell it and cultural details specific to where the story takes place can help identify when a story doesn't add up. Narratives are not confined to the content users share - they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience. For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection - one that considers not just what is said but who appears to be saying it and why.
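The article does not describe the lab's actual model, but the username idea is easy to illustrate. The Python sketch below is hypothetical: the token lists and the profile_handle helper are invented for illustration, and a real system would rely on large name gazetteers and learned classifiers rather than hard-coded lookups.

```python
import re

# Toy lookups, invented for illustration; a real system would use
# large gazetteers and learned models instead.
FIRST_NAMES = {"james", "jim", "olha", "maria"}
ORG_TAGS = {"nyt": "The New York Times", "bbc": "BBC", "afp": "AFP"}
PLACE_TAGS = {"nyc": "New York", "ldn": "London"}

def profile_handle(handle: str) -> dict:
    """Guess the identity cues a reader might infer from a social media handle."""
    cues = {"handle": handle, "name": None, "affiliation": None, "location": None}
    # Split camel case, separators and acronyms into lowercase tokens:
    # "@JamesBurnsNYT" -> ["james", "burns", "nyt"]
    tokens = [t.lower()
              for t in re.findall(r"[A-Z][a-z]+|[a-z]+|[A-Z]{2,}|\d+",
                                  handle.lstrip("@"))]
    for tok in tokens:
        if cues["name"] is None and tok in FIRST_NAMES:
            cues["name"] = tok.capitalize()
        if cues["affiliation"] is None and tok in ORG_TAGS:
            cues["affiliation"] = ORG_TAGS[tok]
        if cues["location"] is None and tok in PLACE_TAGS:
            cues["location"] = PLACE_TAGS[tok]
    return cues

# The article's two examples: same plausible person, different credibility signals.
print(profile_handle("@JamesBurnsNYT"))  # name 'James' + institutional affiliation
print(profile_handle("@JimB_NYC"))       # name 'Jim' + location, no institution
```

Even this crude heuristic separates the two handles the way the article describes: one reads as an institutional voice, the other as a casual local account.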
Also, stories don't always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between. Humans handle this effortlessly - we're used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge. Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.
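As a rough, hypothetical illustration of what timeline extraction involves, the sketch below puts a nonlinear three-post thread back into chronological order using explicit date cues. The posts and the extract_timeline helper are invented; real narratives are far harder, since time is usually expressed relatively ('earlier', 'last week') or left implicit.

```python
import re
from datetime import date

# A nonlinear thread: the narration opens with the latest event, then flashes back.
posts = [
    "2024-10-12: video of 'ballot destruction' goes viral.",
    "Flashback: on 2024-10-09 the same account teased that 'evidence' was coming.",
    "It all started quietly: the account was created on 2024-10-01 with no history.",
]

def extract_timeline(texts):
    """Find explicit ISO-style dates and return the events in chronological order."""
    events = []
    for text in texts:
        match = re.search(r"(\d{4})-(\d{2})-(\d{2})", text)
        if match:
            year, month, day = map(int, match.groups())
            events.append((date(year, month, day), text))
    return sorted(events)  # narrative order -> story-world order

for when, text in extract_timeline(posts):
    print(when, "|", text)
```

Sorting the story-world timeline separately from the telling order is the point: coordinated campaigns often reveal themselves in the timing of events, not in the order they are narrated.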
Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation. Consider the following sentence: "The woman in the white dress was filled with joy." In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive. In order to use AI to detect disinformation that weaponises symbols, sentiments and storytelling within targeted communities, it's critical to give AI this sort of cultural literacy (a toy illustration appears at the end of this article). In our research, we've found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.

Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts in order to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then use countermeasures in real time.

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be sceptical of suspect stories, thus counteracting falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
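To make the white-dress example above concrete, here is a deliberately tiny, hypothetical sketch of culture-conditioned symbol reading. The CONNOTATIONS table and read_symbols helper are invented for illustration; a production system would learn such associations from diverse cultural corpora rather than hard-code them.

```python
# Invented connotation tables; a real system would learn these from cultural corpora.
CONNOTATIONS = {
    "western": {"white dress": "celebration"},
    "east_asian": {"white dress": "mourning"},
}

def read_symbols(sentence: str, culture: str) -> list[str]:
    """Return the culture-specific readings of any known symbols in a sentence."""
    lowered = sentence.lower()
    return [f"{symbol} -> {meaning}"
            for symbol, meaning in CONNOTATIONS[culture].items()
            if symbol in lowered]

sentence = "The woman in the white dress was filled with joy."
print("Western reading:   ", read_symbols(sentence, "western"))     # celebration + joy: consistent
print("East Asian reading:", read_symbols(sentence, "east_asian"))  # mourning + joy: dissonant

# A dissonant pairing of symbol and stated sentiment is one signal that a
# narrative may not have been written by, or for, the community it targets.
```

The same sentence yields opposite readings depending on the lexicon applied, which is exactly the ambiguity a culturally literate detector has to resolve.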


Time of India
6 days ago
- Politics
- Time of India
Russian deputy mayor and prominent 'Time of Heroes' veteran killed in suspected bombing
A Russian deputy mayor and prominent veteran of Moscow's war in Ukraine was killed in a suspected bombing in southern Russia, officials confirmed on Thursday. The 34-year-old, Zaur Aleksandrovich Gurtsiev, died in an early-morning explosion on a street in Stavropol, along with another man. The blast was caused by a 'homemade explosive device,' according to investigators cited by CNN. Russia's Investigative Committee said: 'As part of the investigation, the scene of the incident is being inspected, examinations are being ordered, and the necessary investigative actions are being carried out to establish all the circumstances of the incident.' Stavropol region governor Vladimir Vladimirov said on Telegram that 'all versions are being considered, including the organisation of a terrorist attack' possibly linked to Ukraine.

Gurtsiev had risen to prominence through the Kremlin-backed 'Time of Heroes' programme, launched by President Vladimir Putin to promote war veterans into public office. According to the programme's website, Gurtsiev led aerial operations during Russia's controversial siege of Mariupol in 2022. 'He introduced his developments in the technology of targeting missiles, which allowed them to increase their accuracy and effectiveness many times over, including hitting the Azov supply base,' the site states.

The capture of Mariupol followed an 86-day siege that left much of the city in ruins. UN estimates suggest 90% of residential buildings were either damaged or destroyed, with around 350,000 people forced to flee. Ukrainian officials claim up to 20,000 civilians were killed, though that figure remains unverified.

Gurtsiev's death comes amid a string of attacks on Russian military figures inside the country. Last month, authorities arrested an alleged Ukrainian agent in connection with a car bomb that killed General Yaroslav Moskalik, a top officer in Russia's military command. In February, Armen Sarkisyan, a pro-Russian militia leader in eastern Ukraine, died in a bombing in central Moscow.