
Latest news with #Newsguard

Misinformation about LA Ice protests swirls online: ‘Catnip for rightwing agitators'

Yahoo · Politics · 2 days ago

Since protests against immigration raids in Los Angeles began, false and misleading claims about the ongoing demonstrations have spread on text-based social networks. Outright lies posted directly to social media have mixed with misinformation spread through established channels by the White House as Donald Trump dramatically escalated federal intervention. The stream of undifferentiated real and fake information has painted a picture of the city that forks from reality.

Parts of Los Angeles have seen major protests over the past four days against intensified immigration raids by the US president's administration. On Saturday, dramatic photos from downtown Los Angeles showed cars set aflame amid confrontations with law enforcement. Many posts promoted the perception that mayhem and violence had overtaken the entirety of Los Angeles, even though confrontations with law enforcement and vandalism remained confined to a small part of the sprawling city.

Trump has deployed 2,000 members of the national guard to the city without requesting consent from California's governor, Gavin Newsom, prompting the state to sue over an alleged violation of its sovereignty. The defense secretary, Pete Hegseth, has also ordered the US military to deploy approximately 700 marines to the city.

Amid the street-level and legal conflicts, misinformation is proliferating. Though lies have long played a part in civil and military conflicts, social media often acts as an accelerant, with facts failing to spread as quickly as falsehoods, a dynamic that has played out with the recent wildfires in Los Angeles, a devastating hurricane in North Carolina and the coronavirus pandemic.
Among the most egregious examples, conservative and pro-Russian accounts circulated a video of the Mexican president, Claudia Sheinbaum, recorded before the protests, with the claim that she incited and supported the demonstrations, which have featured Mexican flags, according to the misinformation watchdog Newsguard. The misleading posts, made on X by the conservative commentator Benny Johnson as well as on pro-Trump and Russian state-owned sites, have received millions of views, according to the organization. Sheinbaum in fact told reporters on 9 June: 'We do not agree with violent actions as a form of protest … We call on the Mexican community to act pacifically.'

Conspiratorial conservatives are grasping at familiar bogeymen. A post to X on Saturday claiming that 'Soros-funded organizations' had dropped off pallets of bricks near Immigration and Customs Enforcement (Ice) facilities received more than 9,500 retweets and was viewed more than 800,000 times. The Democratic mega-donor George Soros appears as a consistent specter in rightwing conspiracy theories, and the post likewise attributed the supply drop to LA's mayor, Karen Bass, and California's governor, Gavin Newsom. 'It's Civil War!!' the post read. The photo of stacked bricks originates from a Malaysian construction supply company, and the hoax about bricks being supplied to protesters has spread repeatedly since the 2020 Black Lives Matter demonstrations in the US. X users appended a 'community note' fact-checking the tweet. X's native AI chatbot, Grok, also provided fact-checks when prompted to evaluate the veracity of the post.

In response to the hoax photo, some X users replied with links to real footage from the protests that showed protesters hammering at concrete bollards, mixing false and true material and reducing clarity around what was actually happening.
The independent journalist who posted the footage claimed the protesters were using the material as projectiles against police, though the footage did not show such actions.

The Social Media Lab, a research unit at Toronto Metropolitan University, posted on Bluesky: 'These days, it feels like every time there's a protest, the old clickbaity "pallets of bricks" hoax shows up right on cue. You know the one, photos or videos of bricks supposedly left out to encourage rioting. It's catnip for right-wing agitators and grifters.'

Trump himself has fed the narrative that the protests are inauthentic and larger than they really are, fueled by outside agitators without legitimate interest in local matters. 'These are not protesters, they are troublemakers and insurrectionists,' Trump posted to Truth Social, a message that was screenshotted and reposted to X by Elon Musk. Others in the administration have made similar points on social media. A reporter for the Los Angeles Times pointed out that the White House put out a statement about a particular Mexican national being arrested for allegedly assaulting an officer 'during the riots'. In fact, Customs and Border Protection agents stopped him before the protests began.

Trump has increased the number of Ice raids across the country, stoking fears of deportations across Los Angeles, a city heavily populated by immigrants. Per the Social Media Lab, anti-Ice posts have also spread misinformation. One post on Bluesky, marked 'Breaking', claimed that federal agents had just arrived at an LA elementary school and tried to question first-graders. In fact, the event occurred two months ago. Researchers called the post 'rage-farming to push merch'.
The conspiratorial website InfoWars put out a broadcast on X titled 'Watch Live: LA ICE Riots Spread To Major Cities Nationwide As Democrat Summer Of Rage Arrives', which attracted more than 40,000 simultaneous listeners when viewed by the Guardian on Tuesday morning. Though protests against deportations have occurred in other cities, they have not seen the same level of chaos as Los Angeles. A broadcast on X by the news outlet Reuters, 'Los Angeles after fourth night of immigration protests', had drawn just 13,000 viewers at the same time.

The proliferation of misinformation degrades X's utility as a news source, though Musk continually tweets that it is the top news app in one country or another, most recently Qatar, a minor distinction. Old photos and videos mix with new ones and sow doubt in legitimate reporting. Since purchasing Twitter in late 2022 and renaming it X in 2023, Musk has dismantled many of the company's initiatives for combating the proliferation of lies, though he has promoted the user-generated fact-checking feature 'community notes'. During the 2024 US presidential election in particular, X's owner himself became a hub for the spread of false information, researchers say. In his dozens of posts per day, he posted and reposted incorrect or misleading claims that reached about 2bn views, according to a report from the Center for Countering Digital Hate.

AI disinformation fears grow as summit debates new technology

AFP via Yahoo · Politics · 21-02-2025

As the summit opened, French President Emmanuel Macron cited a "need for rules" to govern artificial intelligence, in an apparent rebuff to US Vice President JD Vance, who had criticized excessive regulation. The debate came amid growing concern over the rise of AI, from deepfakes aimed at influencing elections to chatbots that relay false answers. Analysts say AI has become a widely used tool for fueling disinformation.

In early 2024, an alleged recording garnered significant attention: the leader of Slovakia's pro-European political party could be heard admitting the elections were going to be rigged. But, as an AFP investigation confirmed, the audio was a deepfake: content manipulated using AI to imitate real or non-existent people. During the 2024 US election campaign, then-president Joe Biden was seemingly heard calling New Hampshire voters and advising them not to vote in the primary election. That, too, was a deepfake.

All over the world, politicians are facing this type of manipulated content, which can easily attract tens of thousands of interactions on social media. This was the case for Macron, whose voice was edited to make it seem as if he was announcing his resignation in a widely shared video with a doctored soundtrack. From US President Donald Trump to Russian President Vladimir Putin and Canadian Prime Minister Justin Trudeau to former prime minister of New Zealand Jacinda Ardern, AFP's global fact-checking team has verified numerous posts generated with AI.

The Sunlight Project, a research group on misinformation, warned recently that all women are potentially vulnerable to pornographic deepfakes. This is particularly true for female politicians: AFP has identified elected officials in the United Kingdom, Italy, the United States and Pakistan who have been the targets of AI-generated pornographic images. Researchers fear this worrying trend may discourage women's public participation.
Sexually explicit deepfakes also regularly target celebrities. This was the case for the American superstar Taylor Swift in January 2024: one deepfake of the pop star was viewed 47 million times before being taken down.

AI is also responsible for large-scale digital interference operations, attempts to influence public opinion, usually through social media. Pro-Russian disinformation campaigns known as Doppelgänger, Matriochka and CopyCop are some of the most high-profile examples. Their creators have used fake profiles, or bots, to publish AI-generated content with the goal of diminishing Western support for Ukraine. "What's new is the scale and ease with which someone with very few financial resources and time can disseminate false content that, on the other hand, appears increasingly credible and is increasingly difficult to detect," said Chine Labbé, editor-in-chief of Newsguard, an organization that analyzes the reliability of online sites and content.

The rise of groups using AI content to shape public opinion is particularly worrying in countries experiencing conflict. An AFP investigation revealed that supporters of warring factions in Ethiopia take advantage of ethnic divisions within the country, as well as poor media literacy, to spread AI-generated disinformation that risks renewing violence. The US Federal Bureau of Investigation warned in a May 2024 report that "AI provides augmented and enhanced capabilities to schemes that attackers already use and increases cyber-attack speed, scale, and automation". According to FBI Special Agent in Charge Robert Tripp: "As technology continues to evolve, so do cybercriminals' tactics."

No sector is immune to the risks posed by AI: fake music videos are often spread online, as are fabricated photos of historical events, generated in just a few clicks.
As early as 2022, an alleged song by the American rapper Eminem, in which he seemingly criticized the then president of Mexico, was mistaken for a real song by thousands of internet users. In reality, the audio was generated using AI software. In 2024, AFP investigated what appeared to be an 1873 image of the rarely photographed French poet Arthur Rimbaud. However, the photo was digitally generated; its author told AFP that it depicts an imagined moment, meant to resemble a scene that could have occurred.

Some individuals also use AI to generate online engagement. On Facebook, accounts increasingly publish evocative AI images, not necessarily to circulate false information, but rather to capture users' attention for commercial purposes or even to identify gullible individuals for future scams. At the end of December, when the story of a man who set fire to a woman in the New York subway was making headlines, an alleged photo of the victim spread rapidly on social media. In reality, it was AI-generated, with the aim of directing social media users to a cryptocurrency seeking to capitalize on the tragedy.

Another trend involves deepfakes of well-known doctors. The retired American neurosurgeon Ben Carson has repeatedly featured in deepfakes in which he seemingly promotes unfounded medical cures for diseases ranging from high blood pressure to dementia. "Beyond the risk of misinformation, there's also the risk of web pollution: you never know whether you're dealing with content that has been verified and edited by a thorough human being, or whether it's generated by AI with no concern for truthfulness and accuracy," Newsguard's Labbé said.

After major news events, a flood of AI images is generated. Numerous dubious posts capitalized on the massive fires in Los Angeles in early 2025 to spread fake photos, including images of the "Hollywood" sign in flames and an Oscar statuette amid ashes that were shared around the world.
Popular chatbots such as ChatGPT, developed by the American AI company OpenAI, can also propagate false claims. In fact, "they tend to quote AI-generated sources first, so it's a case of the snake biting its own tail," Labbé said. Research published in early February by Newsguard also shows that these chatbots typically generate more disinformation in languages such as Russian and Chinese, as they are more susceptible to state propaganda discourse. The growing popularity of the Chinese tool DeepSeek has raised national security concerns, and its tendency to repeat official Chinese stances in its responses to user questions further shows the need to impose safety frameworks on AI. Labbé says one solution would be to teach chatbots "to recognize reliable sources from propaganda sources." AFP has also identified false claims spread by answers generated from unreliable sources by Amazon's Alexa, as well as misinterpretations by Google's AI search summaries.

