
The Spread Of Misinformation Is Getting Worse On Social Media

Forbes

4 days ago


A March 2025 study published in the peer-reviewed journal Health Promotion International warned that the spread of misinformation continues to increase and has been identified as a significant threat to society and public health. Social media has also enabled misinformation to have a global reach, the study's authors warned.

"There are many interrelated causes of the misinformation problem, including the ability of non-experts to rapidly post information, the influence of bots and social media algorithms. Equally, the global nature of social media, limited commitment for action from social media giants, and rapid technological advancements hamper progress for improving information quality and accuracy in this setting," the study's abstract stated.

This isn't good news, but it also shouldn't really be news. The problem of social media spreading misinformation has been known for years.

"The cat is out of the bag on online misinformation," explained James Bailey, professor of business at The George Washington School of Business. "Yet good people continue to believe whatever they read in social media. It is not what they read that they believe, but what they read that they want to believe."

Bailey attributes this to the power of the written word, which he suggested is so strong that even when one knows the words might be false, incredulity and affirmation blur as the cat wiggles out of the proverbial bag.

The irony is that it is widely accepted that the tabloids found in the grocery store checkout aisle are often made up or greatly exaggerated, yet the written word in digital form can make stories just as nonsensical as those in the tabloids seem suddenly credible. One issue is that "news" is shared by friends and colleagues, making it seem more plausible. Another concern is that there is no practical way to combat this type of misinformation.

"Law enforcement, policy makers, higher education, and society have not designed any means to check the written words that promulgate misinformation," added Bailey.

Photo manipulation isn't new, but until recently, it took some degree of skill to make it convincing. Now, AI can create photos of events almost instantly, adding another level by which misinformation can be communicated.

"It's not a cat out of the bag, but a tiger," said Bailey.

Digitally manipulated photos and videos will make it even harder to know what is real. Factor in bots sharing these stories, and misinformation will spread at light speed.

"AI-generated multimodal content, such as images, text, audio, video, and edited posts, poses an increasing threat to misinformation on social media. This content is more convincing and harder to detect," said Dr. Siyan Li, assistant professor in the Department of Mass Media at Southeast Missouri State University. "As AI technology advances, it can now create media that closely resembles authentic content."

Just a few years ago, AI-generated images were easier to spot, such as those showing humans with the wrong number of digits. We're no longer in the "uncanny valley," and AI can increasingly produce highly realistic videos that are difficult to distinguish from real footage.

"The rise of user-friendly AI tools has lowered the cost and barriers to creating misinformation," added Li. "These tools are so simple that anyone, regardless of technical background, can generate misleading content quickly and with minimal effort. This widespread accessibility makes it much more difficult to detect and control the spread of AI-generated misinformation."

This technology wasn't developed with such ill intentions. It was made to expand the realm of creative expression. It is how it is being used that should be of concern.

"For sure, these technologies offer endless creative possibilities," said Bailey. "Short films with a touching story that could have never been made otherwise. Clever satire and comedy, obviously presented. Personal stories shared between friends."

It is the potential for catastrophic communication, employing AI to spread misinformation, that should be top of mind.

"I don't believe that AI itself is the problem – rather, it is how we choose to use it," suggested Wayne Hickman, assistant professor of educational leadership at Augusta University's College of Education and Human Development.

Hickman acknowledged that AI-generated videos, manipulated images, and edited posts on social media are significantly amplifying the spread and impact of misinformation, as well as disinformation, especially in areas such as politics and public health.

"AI tools are blurring the line between authentic and inauthentic content, making it increasingly difficult for users to distinguish fact from fiction, especially when content aligns with pre-existing beliefs or confirmation bias," he added. "When we consider political issues, AI-enhanced media often serves to polarize and inflame while simultaneously eroding public trust. Similarly, AI-enhanced falsehoods often endanger people by fueling confusion about public health issues, which in turn undermines efforts to protect community well-being."

Although AI technologies can be used positively, their misuse, especially on social media, where they can quickly find an audience, poses a growing threat to informed discourse.

"Biased or inaccurate training data can cause AI models to produce misleading or incorrect content, even when users have no intention of generating and spreading misinformation," said Li. "Therefore, it is urgent to explore strategies for mitigating AI-generated misinformation on social media at the user, platform, and government levels. Due to the different motivations and drives behind the creation and spread of AI-generated multimodal misinformation, analyzing and addressing each level separately may be more effective in combating the problem."

The other issue is that manipulated content isn't the only problem. Context is often missing on social media, and human nature comes into play: an unedited video can be viewed in a very different light, and without all the facts, misinformation can easily spread.

"The solution is going to require better detection and platform regulation, as well as public education – ensuring individuals can critically evaluate what they see and share online," suggested Hickman.

"What do we do about this?" pondered Bailey. "Systems are being developed to expose this trickery, but they are years behind."
