
Latest news with #ArchivalProducersAlliance

The LA Times published an op-ed warning of AI's dangers. It also published its AI tool's reply

The Guardian

04-03-2025



Beneath a recent Los Angeles Times opinion piece about the dangers of artificial intelligence, there is now an AI-generated response about how AI will make storytelling more democratic. 'Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon,' the co-directors of the Archival Producers Alliance, Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli, wrote on 1 March. Published over the Academy Awards weekend, their comment piece focused on the specific dangers of AI-generated footage within documentary film, and the possibility that unregulated use of AI could shatter viewers' 'faith in the veracity of visuals'.

On Monday, the Los Angeles Times's just-debuted AI tool, 'Insight', labeled this argument as politically 'center-left' and provided four 'different views on the topic' underneath. These new AI-generated responses, which are not reviewed by Los Angeles Times journalists before they are published, are designed to provide 'voice and perspective from all sides,' the paper's billionaire owner, Dr Patrick Soon-Shiong, wrote on X on Monday. 'No more echo chamber.'

Now, a published criticism of AI on the LA Times's website is followed by an artificially generated defense of AI – in this case, a lengthy one, running more than 150 words. Responding to the human writers, the AI tool argued not only that AI 'democratizes historical storytelling', but also that 'technological advancements can coexist with safeguards' and that 'regulation risks stifling innovation'. 'Proponents argue AI's potential for artistic expression and education outweighs its misuse risks, provided users maintain critical awareness,' the generated text reads. Antell, Jenkins and Petrucelli declined to comment on the AI response to their opinion piece.
The 'different views' on LA Times opinion pieces are AI-generated in partnership with Perplexity, an AI company, according to the LA Times, while the 'viewpoint analysis' of the piece as 'Left, Center Left, Center, Center Right or Right' is generated in partnership with Particle News, the Los Angeles Times said.

While Soon-Shiong argued on Monday that the AI-generated content beneath the Los Angeles Times's opinion pieces 'supports our journalistic mission and will help readers navigate the issues facing this nation', the union that represents his paper's journalists takes a different view. While the paper's journalists support efforts to improve news literacy and to distinguish news from opinion, 'we don't think this approach – AI-generated analysis unvetted by editorial staff – will do much to enhance trust in the media,' Matt Hamilton, the vice-chair of the LA Times Guild, said in a statement on Monday. 'Quite the contrary, this tool risks further eroding confidence in the news.'

The AI tool is only providing its extra commentary on a range of opinion pieces, not on the paper's news reporting, the Los Angeles Times said. Most of the time, of course, the newspaper's AI tool will not provide an AI's response to arguments about artificial intelligence. Instead, as in several recent opinion pieces, the AI 'Insights' button provides pro-Trump responses to opinion pieces critical of Donald Trump.

Opinion: When unregulated AI re-creates the past, we can't trust that the 'historical' is real

Yahoo

01-03-2025

  • Entertainment


A furious political leader shouting a message of hate to an adoring audience. A child crying over the massacre of her family. Emaciated men in prison uniforms, starved to the edge of death because of their identities. As you read each sentence, specific imagery likely appears in your mind, seared in your memory and our collective consciousness through documentaries and textbooks, news media and museum visits. We understand the significance of important historical images like these — images that we must learn from in order to move forward — in large part because they captured something true about the world when we weren't around to see it with our own eyes.

As archival producers for documentary films and co-directors of the Archival Producers Alliance, we are deeply concerned about what could happen when we can no longer trust that such images reflect reality. And we're not the only ones: In advance of this year's Oscars, Variety reported that the Motion Picture Academy is considering requiring contenders to disclose the use of generative AI. While such disclosure may be important for feature films, it is clearly crucial for documentaries. In the spring of 2023, we began to see synthetic images and audio used in the historical documentaries we were working on. With no standards in place for transparency, we fear this commingling of real and unreal could compromise the nonfiction genre and the indispensable role it plays in our shared history.

In February 2024, OpenAI previewed its new text-to-video platform, Sora, with a clip called 'Historical footage of California during the Gold Rush.' The video was convincing: A flowing stream filled with the promise of riches. A blue sky and rolling hills. A thriving town. Men on horseback. It looked like a western where the good guy wins and rides off into the sunset. It looked authentic, but it was fake.

OpenAI presented 'Historical Footage of California During the Gold Rush' to demonstrate how Sora, officially released in December 2024, creates videos based on user prompts using AI that 'understands and simulates reality.' But that clip is not reality. It is a haphazard blend of imagery both real and imagined by Hollywood, along with the industry's and archives' historical biases. Sora, like other generative AI programs such as Runway and Luma Dream Machine, scrapes content from the internet and other digital material. As a result, these platforms are simply recycling the limitations of online media, and no doubt amplifying the biases. Yet watching it, we understand how an audience might be fooled. Cinema is powerful that way.

Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon. If our faith in the veracity of visuals is shattered, powerful and important films could lose their claim on the truth, even if they don't use AI-generated material. Transparency, something akin to the food labeling that informs consumers about what goes into the things they eat, could be a small step forward. But no regulation of AI disclosure appears to be over the next hill, coming to rescue us.

Generative AI companies promise a world where anyone can create audio-visual material. This is deeply concerning when it's applied to representations of history. The proliferation of synthetic images makes the job of documentarians and researchers — safeguarding the integrity of primary source material, digging through archives, presenting history accurately — even more urgent. It's human work that cannot be replicated or replaced.

One only needs to look to this year's Oscar-nominated documentary 'Sugarcane' to see the power of careful research, accurate archival imagery and well-reported personal narrative to expose hidden histories, in this case about the abuse of First Nations children in Canadian residential schools. The speed with which new AI models are being released and new content is being produced makes the technology impossible to ignore. While it can be fun to use these tools to imagine and test, what results is not a true work of documentation — of humans bearing witness. It's only a remix.

In response, we need robust AI media literacy for our industry and the general public. At the Archival Producers Alliance, we've published a set of guidelines — endorsed by more than 50 industry organizations — for the responsible use of generative AI in documentary film, practices that our colleagues are beginning to integrate into their work. We've also put out a call for case studies of AI use in documentary film. Our aim is to help the film industry ensure that documentaries will deserve that title and that the collective memory they inform will be protected.

We are not living in a classic western; no one is coming to save us from the threat of unregulated generative AI. We must work individually and together to preserve the integrity and diverse perspectives of our real history. Accurate visual records not only document what happened in the past, they help us understand it, learn its details and — maybe most importantly in this historical moment — believe it. When we can no longer accurately witness the highs and lows of what came before, the future we share may turn out to be little more than a haphazard remix, too.

Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli are co-directors of the Archival Producers Alliance. This story originally appeared in the Los Angeles Times.
