

How AI And OSINT Can Help Rebuild Trust In Media

Forbes

09-07-2025


Ivan Shkvarun is the CEO and Co-founder of the data-driven investigation company Social Links (USA).

Digital media is facing a trust crisis. According to the 2024 Edelman Trust Barometer, media ranks as the least trusted innovation-driving industry, and only 40% of people trust the news they consume. The World Economic Forum's 2024 Global Risks Report even ranked disinformation as the world's top risk over the coming two years. Ironically, the same technology that eroded digital trust, AI, is now being used with open-source intelligence (OSINT) to verify content and rebuild trust online.

The Trust Crisis

The role of AI in creating and distributing misinformation is hard to ignore. Recently, over 9,000 Facebook pages were removed after Australians lost $43.4 million to celebrity deepfakes. Other high-profile incidents include AI-generated explicit images of Taylor Swift, deepfake audio of London Mayor Sadiq Khan criticizing Armistice Day parades, and a fake video call that led a finance worker to transfer $25 million to scammers. Meanwhile, AI can replicate a personality with 85% accuracy in just two hours, raising concerns about digital identity manipulation.

Social media platforms do not make it any easier to distinguish real accounts from AI-generated ones. For example, Meta recently rolled out AI products that let users create characters on Instagram and Facebook, expecting these AI characters to exist alongside human accounts. The result? Users grow skeptical and disengage from platforms. News organizations fight an uphill battle against fake news. Businesses struggle with credibility. Governments face challenges in counteracting digital deception and maintaining democratic integrity.

How AI And OSINT Can Restore Digital Trust

Eliminating AI is not the only approach. Instead, AI can be used to address the very misinformation it helps create. One way to do this is with OSINT tools.
OSINT is a set of methods for gathering and analyzing publicly available data from sources like social media, websites, news reports and forums, aimed at separating facts from fakes. Here are five ways AI and OSINT can help rebuild trust in online platforms:

1. AI can scan vast amounts of content in seconds, identifying misleading narratives before they go viral. OSINT tools track trending topics, hashtags and unusual activity in real time to detect inconsistencies and provide users with context.

2. When a claim does go viral, OSINT platforms can check it against trusted sources. They can also use reverse image search to see whether a photo or video has been reused from an unrelated event. For that reason, OSINT technologies have made their way into journalism; major news outlets in Norway, for example, are already training journalists in OSINT.

3. Deepfake technology is becoming scarily good. To expose deepfakes, advanced AI algorithms analyze video, audio and images to identify manipulated content through reverse image search, geolocation or shadow analysis. Blockchain-based verification also helps track origins and prevent false information. OSINT can help here by analyzing how accounts behave. As DataJournalism points out, AI tools, combined with OSINT verification practices, are a good starting point for detecting deepfakes.

4. Given the scale of content posted daily, human moderators can't keep up. AI-driven moderation tools can identify and remove hate speech, disinformation and violent imagery without infringing on free speech. Many OSINT platforms let users set up alerts for specific keywords or topics. If OSINT technology is built into the AI moderation system and misinformation spikes around a certain issue, such as an election, health crisis or armed conflict, analysts can act fast. In another example, the United Nations sees potential in using OSINT to monitor the web for keywords and phrases associated with trafficking and smuggling.
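To make the keyword-alert idea above concrete, here is a minimal, self-contained sketch of spike detection over a stream of posts. The class name, window size and threshold are illustrative assumptions, not the API of any real OSINT product; production systems would add deduplication, bot filtering and per-source weighting.

```python
from collections import deque, Counter

class KeywordAlertMonitor:
    """Toy keyword-alert monitor: flag a topic when mentions within a
    sliding window of recent posts cross a threshold. Illustrative only."""

    def __init__(self, keywords, window_size=100, spike_threshold=10):
        self.keywords = {k.lower() for k in keywords}
        self.window = deque(maxlen=window_size)  # keyword hits per recent post
        self.spike_threshold = spike_threshold

    def ingest(self, post_text):
        """Record one post; return the set of keywords currently spiking."""
        tokens = set(post_text.lower().split())
        self.window.append(tokens & self.keywords)
        counts = Counter(k for hits in self.window for k in hits)
        return {k for k, n in counts.items() if n >= self.spike_threshold}

# Simulate a burst of posts about one topic.
monitor = KeywordAlertMonitor(["election", "vaccine"],
                              window_size=50, spike_threshold=5)
alerts = set()
for post in ["breaking news about the election"] * 6:
    alerts = monitor.ingest(post)
print(alerts)  # {'election'} once mentions cross the threshold
```

The sliding window matters here: old posts fall out of the deque automatically, so an alert reflects current activity rather than all-time mention counts.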
5. Many AI systems work like "black boxes," making decisions without explaining how they got there. To rebuild trust, developers need to focus on explainable AI (XAI), a way to show users why an AI flagged certain content. In research on identifying DNS attacks, for example, XAI and OSINT have been combined to improve model explainability. OSINT tools can make AI more explainable by uncovering the digital footprints that justify AI-driven decisions. If an AI-powered content moderation system detects a deepfake video, OSINT can trace the source, compare metadata and uncover inconsistencies, explaining the decision.

AI's impact on digital trust isn't just a tech issue; it's a policy challenge. Together, governments, researchers and tech companies must set ethical standards (but not the rules) for how AI is built and used. AI ethics committees, independent audits and cross-platform verification systems can help hold AI accountable. By providing a "source of truth," OSINT technologies can support those initiatives in building digital trust.

OSINT Limitations And Concerns

OSINT is useful, but it comes with strings attached. A single tweet or forum post may be harmless, yet when combined with hundreds of data scrapes, it can form a dataset that amounts to surveillance or a detailed profile of a specific individual. Data noise and overload are another concern. Public feeds overflow with outdated links and hoax data. We've all seen the dramatic but fake "war footage," showing how easy it is to misinterpret, or weaponize, public information. Without a habit of double-checking sources, analysts can end up spreading the very myths they set out to debunk, especially when recommendation engines keep pushing the same false story back in front of them.

On the legal side, gray areas exist. Governments still argue over how much scraping or archiving is fair play, and even U.S. agencies have paused OSINT projects after civil-liberties pushback and a lack of guidance.
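Returning briefly to the explainability point above (tracing metadata to justify a flag), here is a toy sketch of that step. The plain dictionaries stand in for real EXIF or platform metadata, and all field names are hypothetical; a real pipeline would extract these fields with an EXIF library and cross-check them against multiple sources.

```python
from datetime import date

def explain_flag(claimed, metadata):
    """Compare a post's claims against metadata recovered from the file
    and return human-readable reasons for flagging it. Field names
    ('capture_date', 'gps', 'event_date', 'location') are illustrative."""
    reasons = []
    if "capture_date" in metadata and metadata["capture_date"] < claimed["event_date"]:
        reasons.append(
            f"image captured {metadata['capture_date']} predates "
            f"claimed event {claimed['event_date']} (likely reused footage)")
    if "gps" in metadata and metadata["gps"] != claimed.get("location"):
        reasons.append(
            f"GPS tag {metadata['gps']} does not match "
            f"claimed location {claimed.get('location')}")
    return reasons  # empty list means no inconsistency was found

claim = {"event_date": date(2024, 11, 5), "location": "Kyiv"}
meta = {"capture_date": date(2019, 3, 2), "gps": "Aleppo"}
for reason in explain_flag(claim, meta):
    print("flagged:", reason)
```

The point of returning reasons as plain sentences, rather than a bare score, is exactly the XAI goal described above: a user can see which footprint contradicted the claim.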
Privacy laws shift from one border to the next, complicating cross-border investigations: what is legal in Berlin may lead to a lawsuit in Boston. Bottom line: good OSINT is not plug-and-play. It needs scrapers, data dashboards, proxy networks, and people who know how to use them without breaking platform rules or local regulations. OSINT works best when paired with governance, audit trails and human oversight.

Conclusion

The future of digital trust depends on how we use AI, not whether we use it. Prioritizing ethical AI and transparent decision-making will be crucial to restoring trust in digital platforms. But if we succeed, we won't just restore trust; we'll set new standards.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
