Latest news with #KizunaAI


Observer
03-07-2025
AI Anchors & Deepfake Dilemmas
What generative AI means for storytelling, newsroom ethics, and public trust

In Japan, a moment of technological bravado turned into a reflection of society's unease about the future of information. In 2018, NHK, Japan's public broadcaster, introduced a collaboration with 'Kizuna AI,' an AI-generated news anchor designed to deliver updates seamlessly in multiple languages. The debut was streamed live to millions, and reaction was swift and divided. Some viewers marvelled at the innovation, seeing it as a step forward; others felt unsettled, questioning whether they were watching genuine journalism or a synthetic facsimile. 'It felt surreal—I was watching a robot instead of a person,' said one viewer on social media. 'It made me wonder what the future holds for real news anchors.'

This incident exemplifies a larger phenomenon: the gradual integration of artificial intelligence into newsrooms around the world, raising profound questions about authenticity, trust, and the very foundation of journalism.

Artificial intelligence is no longer a speculative tool; it is actively reshaping how news is produced and delivered across continents. In South Korea, major broadcasters such as KBS and MBC have piloted AI newsreaders capable of delivering stories in multiple languages, around the clock, without the fatigue or bias that human anchors sometimes face. These systems analyse enormous datasets rapidly and generate scripts, allowing faster coverage of breaking news. In Russia, state-funded broadcasters, including the news channel Rossiya-24, have experimented with virtual presenters that deliver the news with a natural voice and facial expressions similar to a human anchor's. Meanwhile, in Europe, outlets like Deutsche Welle have begun experimenting with AI-assisted translation and summarisation tools, making content accessible to a broader audience.

These initiatives are driven by practical benefits: reducing costs, increasing the speed of coverage (especially during crises), and personalising news. Yet the ethical landscape is murky. As Dr. Sarah Thompson, media analyst at The Guardian, remarks, 'While AI can be a powerful tool, it risks depersonalising news, eroding verification standards, and raising questions about accountability—especially when AI-generated content goes viral and causes harm.'

Deepfakes and the Misinformation Menace

The public's response to AI in journalism is varied and complex. For some, these virtual anchors evoke admiration for technological progress and efficiency; a segment of the audience is intrigued by the innovation and appreciates the novelty and convenience. However, many people express discomfort, distrust, and even disdain. 'It's unsettling,' confesses one social media user, a teacher from London. 'I don't feel like I'm getting honest news if I can't see a real person behind it. There's something about authenticity that's missing.'

According to a recent survey by the Pew Research Center, trust in traditional media is declining globally, and the rise of AI and deepfake content is only deepening scepticism. Nearly 60% of respondents in the U.S. think that intentionally manipulated media will become harder to detect in the coming years, leading to a 'crisis of credibility' for the news industry. Deepfake technology, highly realistic artificial video and imagery manipulated via AI, has emerged as perhaps the greatest threat to the integrity of information.
In early 2024, a manipulated video of a prominent political leader making inflammatory remarks went viral, causing widespread outrage before being conclusively debunked by fact-checkers. The damage extended beyond social media, influencing public opinion and political discourse. Professor Hany Farid, a leading computer science and digital forensics expert at UC Berkeley, warns, 'Deepfakes can be nearly impossible to distinguish from authentic footage. The societal risk is that once society loses faith in visual evidence, it becomes exceedingly difficult to trust any media source.' A recent BBC investigation highlighted how deepfakes are increasingly being used to spread disinformation, sway elections, and manipulate stock markets. As the technology becomes more accessible, the challenge isn't just technical detection but societal resilience: how to safeguard truth amid a flood of synthetic content.

The rising use of AI and deepfake technology prompts urgent questions about ethics, responsibility, and standards. When a news organisation publishes AI-generated content, transparency becomes critical: audiences need to know whether they are viewing a human or an artificial creation. Media ethicist Robert Dreher, a professor at the University of Texas and author of Mass Media and Its Ethical Dilemmas, stated in a 2022 interview with The Atlantic, 'Transparency about the use of AI in newsrooms is not optional—it's essential. If audiences are kept in the dark, trust erodes rapidly, and the integrity of journalism is compromised.' His comments highlight the importance of clear disclosure to maintain credibility in a landscape saturated with synthetic content.

Some outlets are investing in this future through innovation. The Associated Press, for example, uses AI to generate financial reports and sports summaries, freeing up journalists to focus on investigative stories and nuanced reporting. Similarly, the BBC has developed AI-powered tools to help verify video content, reducing the spread of deepfake misinformation.

Building Resilience: Education and Regulation

The path forward requires a combination of technological innovation, legislative action, and public media literacy. Educating audiences about the existence of synthetic media and training them to recognise the hallmarks of manipulation are crucial steps. Regulators are also stepping in: the European Union has proposed legislation requiring platforms to label deepfake videos and take responsibility for combating misinformation, while tech companies such as Meta and Google are developing detection algorithms designed to fight AI-generated disinformation.

Scholars share the public's unease. 'When I see a digital face delivering the news, I wonder if I can trust it,' said David Bromwich, a media scholar and professor at Yale University, in a 2023 interview with The New York Times. 'There's a human element missing, and that raises questions about authenticity and integrity.'

Embracing Vigilance and Ethical Progress

The rise of AI-driven news, virtual anchors, and deepfakes reveals the necessity for a cultural shift in how we consume and trust information. As Dr. Emily Carter of The Times notes, 'We need to develop a more sceptical, discerning public media literacy. Recognising that not everything we see or hear is real is now as essential as understanding the effects of climate change.'

This technological revolution forces us to confront uncomfortable truths: much of what we may have taken for granted about the authenticity of news is shifting beneath our feet. Trust, once a given, now requires active safeguarding. The Japanese AI anchor incident exemplifies both the promise and peril of this new era. As AI continues to infiltrate every facet of media, it is increasingly vital for audiences, journalists, policymakers, and technologists to work in concert. Only through transparency, regulation, and education can we hope to harness AI's potential for good without sacrificing the integrity of public discourse.

The real story isn't just about the technology itself; it's about what kind of society we choose to build around it. Will we accept AI's marvels with open eyes, or fall prey to its manipulations? The choice is ours, but the stakes could not be higher.
Yahoo
01-03-2025
- Entertainment
Les Copaque, Streamline Studios team up to create Upin & Ipin game
This week on our gaming recap, we have some Malaysian developments, as well as e-sports titles getting their day in Olympic-calibre sun.

The Big Picture

Malaysian animation studio Les Copaque and game developer and asset studio Streamline Studios are teaming up to create an Upin & Ipin open-world game. The game is slated to be out this year for PC and consoles. Initial teasers and gameplay footage suggest a third-person title with action, platforming, and mini-games as the main activities. It will retain the cartoon's bright art style and core themes of family and adventure, meaning it will be an all-ages title.

Short Beats

Warner Bros. has cancelled its upcoming Wonder Woman game and shut down its game studio Monolith Productions. The company has also shut down Player First Games (MultiVersus) and Warner Bros. Games San Diego.

Xbox role-playing game Fable has been pushed back to 2026.

The Olympic Council of Asia has announced that the following games will be medal events: Mobile Legends, Street Fighter 6, Tekken 8, The King of Fighters XV, Pokémon Unite, Honor of Kings, League of Legends, PUBG Mobile, Naraka: Bladepoint, Gran Turismo 7, eFootball, and Puyo Puyo Champions.

Pocketpair, the company that made survival game Palworld, gave its developers the day off yesterday to play Monster Hunter Wilds.

After a three-year hiatus, original VTuber Kizuna AI has returned, now focused on making music.

Op-Eds

We suggest some Monster Hunter clones to buy if you feel Monster Hunter Wilds is too expensive.

Here are our thoughts on the recent Sonic Racing: CrossWorlds and Fatal Fury: City of the Wolves betas held during the weekend of Feb 22.

Games out this week

Monster Hunter Wilds is the latest entry in the action RPG series where you team up with other hunters wielding giant weapons to take down giant monsters and beasts. You're also joined by cat slaves called Palicos, and you take the spoils of hunted beasts to craft better weapons and gear. You do all of this either solo or with friends online as you uncover the connection between the people of the Forbidden Lands and the locales they inhabit.

Ninja Five-O is a re-release of a classic 2D side-scrolling action game originally released on the Game Boy Advance. As ninja Joe Osugi, you use a wide variety of masterful ninja skills to uphold justice, solving treacherous crimes such as bank heists and hijackings. Use unique ninjutsu moves to protect the city of Zipangu and take down the evil Mad Mask bosses.

Yu-Gi-Oh! Early Days Collection is a collection of 14 classic Yu-Gi-Oh! digital games from the Game Boy era, now on PC and Nintendo Switch.

Recommended Viewing

Here's a behind-the-scenes look at the making of the upcoming Terminator game Terminator 2D: No Fate by Bitmap Bureau.

In one of the rare instances of people doing their jobs instead of being shills, Kinda Funny Games put the spotlight on Jason Schreier breaking down the decline of Warner Bros.' games division.