
The deepfake era has only just started

Winnipeg Free Press



Opinion

Last month, Google released its newest content tool, Veo 3, powered by artificial intelligence (AI). 'We're entering a new era of creation with combined audio and video generation that's incredibly realistic,' declared Josh Woodward, vice-president of Google Labs, the tech company's experimental division.

And Google isn't alone. Synthetic media tools have existed for years, and with each iteration the technology unlocks new innovations and commercial possibilities. South Korean broadcasters use digital news anchors to deliver breaking stories more quickly. Hollywood uses AI to 'de-age' older actors playing younger characters onscreen. Digital avatars allow customers to try on clothing virtually. British software firm Synthesia has helped thousands of multinational companies develop audiovisual training programs and communications materials reflecting the languages and ethnicities of workers across their supply chains, or of clients in different global regions.

But AI deepfakes, digital forgeries created by machine learning models, are empowering bad actors too. Whether democratic societies are equipped to deal with the consequences remains an open question. Indeed, many are already reeling from the corrosive effects of far cruder forms of disinformation.

The only certainty going forward is that deepfake tools will become more sophisticated and easier to use. Commonly available generative AI apps can already make real people appear to say or do things they never did, or render fake characters uncannily persuasive. To demonstrate, CBC News used Google's Veo 3 to create a hyper-realistic news segment about wildfires spreading in Alberta after entering just a one-sentence prompt.

Deepfake scams are surging as well. Altered images and recordings of real people, often created from content they themselves uploaded to social media, are being used to dupe others into fake online romances or bogus investment deals.
Cloning someone's voice now requires feeding as little as a 30-second clip of their speech into generative AI.

The political dangers are frightening. In early October 2023, Michal Šimečka, a progressive leader vying to be Slovakia's prime minister, lost to his pro-Kremlin opponent after a fake audio clip emerged online days before the ballot. In it, Šimečka supposedly suggests to a journalist that he would consider buying votes to seal a victory. In Canada, a network of more than two dozen fake Facebook accounts tried to smear Prime Minister Mark Carney to users outside the country, running deepfake ads in which Carney announces dramatic new regulations shortly after winning the election.

In his latest book, Nexus, historian Yuval Noah Harari argues that all large democracies owe their successes to 'self-correction mechanisms': civil society, the media, the courts, opposition parties and institutional experts, among others. The caveat is that each of these relies on modern information technologies, and to function, their actions must be based on information grounded in truth.

The problem: today's tech giants have instead obsessed over capturing greater market share in the attention economy, prioritizing user engagement above all else. 'Instead of investing in self-correcting mechanisms that would reward truth telling, the social media giants actually developed unprecedented error-enhancing mechanisms that reward lies and fiction,' Harari writes.

This pattern is now being repeated with AI. Just as Google released Veo 3, the founder of Telegram forged a new partnership with Elon Musk's AI company to integrate its Grok chatbot into Telegram's platform. Telegram, however, is notoriously hands-off with moderation; it is a haven for extremists, grifters and nihilists.
'If Grok allows Telegram (users) to create more persuasive memes and other forms of propaganda at scale, that could make it an even more powerful tool for spreading toxicity, from disinformation to hate speech to other odious content,' warns Bloomberg tech columnist Parmy Olson.

The danger is being further aggravated by partisan agendas in Washington. Republican lawmakers have inserted a stealth clause into the tax bill winding its way through Congress that, if passed, would ban states, including California, which has jurisdiction over Silicon Valley, from regulating AI for 10 years.

Social polarization, foreign interference, fraud and personal revenge schemes will all likely worsen as deepfakes become indiscernible from reality, tearing at the fabric of liberal democracy.

There is also another, grimmer possibility. Rather than stoke outrage, tribalism and conspiratorial thinking among voters, these new digital tools might soon breed something arguably much worse: apathy. Put off by a civic life awash in misinformation and deepfakes, an even larger portion of the electorate may eventually choose to avoid politics altogether. For them, the time, stress and confusion involved in discerning fact from fiction won't be worth it, especially not when AI elsewhere delivers instant, endless entertainment and escapism on demand, genuine or not.

Kyle Hiebert is a Montreal-based political risk analyst and former deputy editor of the Africa Conflict Monitor.
