
The LA Times published an op-ed warning of AI's dangers. It also published its AI tool's reply
'Some in the film world have met the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon,' the co-directors of the Archival Producers Alliance, Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli, wrote on 1 March.
Published over the Academy Awards weekend, their comment piece focused on the specific dangers of AI-generated footage within documentary film, and the possibility that unregulated use of AI could shatter viewers' 'faith in the veracity of visuals'.
On Monday, the Los Angeles Times's just-debuted AI tool, 'Insights', labeled this argument as politically 'center-left' and provided four 'different views on the topic' underneath.
These new AI-generated responses, which are not reviewed by Los Angeles Times journalists before they are published, are designed to provide 'voice and perspective from all sides,' the paper's billionaire owner, Dr Patrick Soon-Shiong, wrote on X on Monday. 'No more echo chamber.'
Now, a published criticism of AI on the LA Times's website is followed by an artificially generated defense of AI – in this case, a lengthy one, running more than 150 words.
Responding to the human writers, the AI tool argued not only that AI 'democratizes historical storytelling', but also that 'technological advancements can coexist with safeguards' and that 'regulation risks stifling innovation'.
'Proponents argue AI's potential for artistic expression and education outweighs its misuse risks, provided users maintain critical awareness,' the generated text reads.
Antell, Jenkins and Petrucelli declined to comment on the AI response to their opinion piece.
The 'different views' on LA Times opinion pieces are AI-generated in partnership with Perplexity, an AI company, while the 'viewpoint analysis' labeling a piece as 'Left, Center Left, Center, Center Right or Right' is generated in partnership with Particle News, the Los Angeles Times said.
While Soon-Shiong argued on Monday that the AI-generated content beneath Los Angeles Times's opinion pieces 'supports our journalistic mission and will help readers navigate the issues facing this nation', the union that represents his paper's journalists takes a different view.
While the paper's journalists support efforts to improve news literacy and to distinguish news from opinion, 'we don't think this approach – AI-generated analysis unvetted by editorial staff – will do much to enhance trust in the media,' Matt Hamilton, the vice-chair of the LA Times Guild, said in a statement on Monday. 'Quite the contrary, this tool risks further eroding confidence in the news.'
The AI tool provides its extra commentary only on opinion pieces, not on the paper's news reporting, the Los Angeles Times said.
Most of the time, of course, the newspaper's AI tool will not provide an AI's response to arguments about artificial intelligence. Instead, as in several recent opinion pieces, the AI 'Insights' button provides pro-Trump responses to opinion pieces critical of Donald Trump.
Related Articles


The Herald Scotland
30-05-2025
Will AI doom the last of us? As a writer, I don't feel safe
But I have a more down-to-earth worry: How much longer will I have a job as a writer, which I feel lucky to hold as my vocation?

AI seemed to happen gradually, then suddenly (to quote Ernest Hemingway, one of my favorite human authors). In recent months, I've noticed that no matter what I'm doing online - writing a column in Google Docs, an email in Outlook, a note to a friend on Instagram - an AI bot will pop in to ask if I would like "help" crafting my message. As someone who makes my living with words and enjoys using them, I find AI's uninvited intrusions into my day not just annoying, but alarming.

I'll admit, as an opinion columnist, I had thought that my skills were safe from robot replacement - at least in my lifetime. Aren't reason and persuasion uniquely human abilities? What does it mean if they aren't?

Hot takes from an artificial 'mind'? No thanks.

Lest you think I'm overreacting, real-world newspapers in the United States already are turning to AI to craft news and opinion for their readers. For instance, the Los Angeles Times has started offering online readers the option to read AI-generated counterpoints to the opinion columns it runs. The "Insights" feature judges the piece's point of view and then spits out an opposing argument.

That was the brainchild of Times owner Dr. Patrick Soon-Shiong, who wants the opinion pages to reflect a broader range of views. "If you just have the one side, it's just going to be an echo chamber," Soon-Shiong told Fox News about the project, saying he thinks it's important "that all voices be heard."

Now, as a conservative writer in a liberal media world, I respect Soon-Shiong's quest for balance. And I appreciate that my newspaper has crafted an opinion team with many different points of view - something that wasn't always the case. But robots? Is that where we want other perspectives to come from?
The Times, as you might expect, has already run into some awkward AI offerings. It isn't alone, though. This month, several newspapers, including the Chicago Sun-Times and The Philadelphia Inquirer, ran a syndicated summer book list that included books ... that don't exist. Only five of the 15 recommended books were real, although I'll acknowledge that some of the made-up titles sounded interesting. The writer who "created" the list has admitted to using AI, and has since been fired.

The idea of news and commentary coming from a bot is unsettling, and not just because of my job security. Plenty of Americans feel the same way. New research from the Poynter Institute and the University of Minnesota found nearly half of those surveyed don't want news from generative artificial intelligence. And 20% said the media shouldn't use AI at all.

I decided to take AI for a test drive. What do you think of the result?

Given my fear of being replaced by a robot, I've stayed away from sites like ChatGPT, although I've received many examples from friends and family of what it can do. But I decided to give it a simple prompt: "Write a column in the style of Ingrid Jacques of USA TODAY."

In mere seconds, it "wrote" a 675-word column, complete with a headline and byline. I'll share the first few lines of what popped up:

The Uncomfortable Truth About Wokeism
By Ingrid Jacques

The American cultural landscape is being shaped by a force so pervasive, so influential, that it seems to touch every corner of our lives. I'm not talking about the rise of artificial intelligence, nor the latest social media craze. No, I'm referring to the ideology of "wokeism," which, like a slow-moving storm, is now fully entrenched in our schools, workplaces, and public discourse.

Not bad, right?
And I thought the nod to AI was apropos, given the topic of this column. It's still creepy, though.

With the current political divisions in our country, I believe that sharing different points of view in a civil way is more important than ever, and I worry about the implications if those "thoughts" are coming from something nonhuman. So, while this may come as a disappointment to some of you, you're stuck with the "real" me. For now.

Ingrid Jacques is a columnist at USA TODAY. Contact her at ijacques@ or on X: @Ingrid_Jacques


Daily Mirror
22-04-2025
The Oscars introduce new rule which allows AI-assisted films to win big awards
The Academy Awards have introduced a new change that allows films made with the help of artificial intelligence to win major awards.

The use of AI in movies has already been a controversial topic: The Brutalist received backlash after the movie's editor, Dávid Jancsó, revealed AI was used to create a more convincing Hungarian accent. It still went on to win Best Actor, Best Cinematography and Best Original Score at this year's awards show.

And now, the Academy of Motion Picture Arts and Sciences has confirmed that movies using AI tools will be able to qualify for awards. According to the rules, AI use won't automatically boost or reduce a film's chance of getting a nomination. The Academy said the most important factor is the degree of human creativity involved in the entire process, which means AI technology can only assist in the project, not drive the storytelling or take over the entire thing.

Emilia Pérez, which won Zoe Saldaña the award for Best Supporting Actress, also used voice-enhancing software for its musical numbers.

Meanwhile, The Brutalist's film editor Dávid Jancsó revealed in a previous interview with Red Shark News how the movie's team used AI, and why they implemented it. The production used artificial intelligence to fill in minor language gaps in the Hungarian spoken by Adrien Brody and his co-star Felicity Jones during a distinct part of the movie. 'I am a native Hungarian speaker and I know that it is one of the most difficult languages to learn to pronounce,' Dávid told the news outlet. 'It's an extremely unique language.'

For a few minutes in the movie, a letter that Adrien's character has written to his wife is read out loud in Hungarian. According to TheWrap, this was the only part of Adrien's performance for which Respeecher was used.
'If you're coming from the Anglo-Saxon world certain sounds can be particularly hard to grasp,' Dávid explained. 'We first tried to ADR these harder elements with the actors. Then we tried to ADR them completely with other actors, but that just didn't work. So we looked for other options of how to enhance it.'

The production team ran the actors' voices through Respeecher and added AI-generated Hungarian words. 'Most of their Hungarian dialogue has a part of me talking in there. We were very careful about keeping their performances,' he continued. 'It's mainly just replacing letters here and there. You can do this in ProTools yourself, but we had so much dialogue in Hungarian that we really needed to speed up the process, otherwise we'd still be in post.'


The Guardian
05-03-2025
The LA Times' AI ‘bias meter' looks like a bid to please Donald Trump
The past few months have been brutal ones for the readers and journalists of the largest news organization in California, the Los Angeles Times.

Since he bought the paper in 2018, the billionaire and medical entrepreneur Patrick Soon-Shiong has become something of a Donald Trump acolyte. That's his right. Many media owners have political views; but the best keep those views to themselves, or at least allow their news organizations to exercise editorial freedom. But Soon-Shiong, who took over promising to steady the ship and return it to financial health, has turned out to be a deeply flawed leader.

You might recall that many longtime subscribers canceled their subscriptions months ago when Soon-Shiong blocked his editorial board's decision to endorse Kamala Harris for president. Then he reportedly told his editorial board to 'take a break' from writing about Trump, and, according to a staff memo signed by members of the opinion section, instituted a policy in which articles critical of the newly elected president were to be published side by side with the opposing, pro-Trump view. That's straight-up meddling.

But now, he's taken a more public-facing step by inflicting what's become known as a 'bias meter' on some LA Times opinion pieces. Its findings are generated by artificial intelligence, without human intervention or review. If there's one firm rule about the use of AI in journalism, it's this: there should always be a 'human in the loop' before publication. Why? Because AI, at least at this point, is often wrong on the facts, and because many news consumers are suspicious of it.

At the LA Times, the AI-powered 'Insights' feature evaluates opinion articles and puts a label on them – for example, 'center left'. Then it provides 'different views'. Articles about Trump-related policies have gotten the bias-meter treatment – for example, an opinion piece on Ukraine that stated that 'Trump is surrendering a century's worth of US global power in a matter of weeks.'
According to the Guardian's Lois Beckett, that piece is followed by an AI-generated summary of 'different views', such as describing Trump's policy as 'a pragmatic reset of US foreign policy'.

Soon-Shiong called the new feature a victory for viewpoint diversity. 'No more echo chamber,' he crowed on social media. It looks more like a way to avoid offending President Trump.

Let's get real. Many opinion pieces at legitimate publications these days are critical of Trump – for good reason, given the chaotic damage he and his helpers have unleashed. So this effort is less a rooting out of lefty bias than a way to give a platform to pro-Trump views.

At well-run news companies, it is journalists themselves – editors, in particular – who can point out unfairness, inaccuracy or bias. And they deal with that, editor to writer, before pieces are published. 'Our members – and all Times staffers – abide by a strict set of ethics guidelines, which call for fairness, precision, transparency, vigilance against bias, and an earnest search to understand all sides of an issue,' the LA Guild, the union representing the paper's journalists, said in a statement objecting to Soon-Shiong's idea.

These days, many of the opinion-side journalists at the LA Times have fled. This is apparently no longer a place where they feel they can do their jobs.

Soon-Shiong's gambit is happening in a broader context of media companies yielding to Trump's will, as Axios's Sara Fischer aptly noted. Journalists are doing their jobs, but owners are 'compromised', she wrote, listing some of the most prominent examples: ABC News settled a defamation suit by Trump it could have won; CBS seems poised to settle Trump's absurd claim against its flagship 60 Minutes show; Disney and Paramount have rolled back some DEI policies; the Washington Post's opinion section will reflect owner Jeff Bezos's beliefs about 'personal liberties and free markets'.
Some of the bias-meter results so far are simply weird, as in an AI response to an article critical of AI itself. The original piece, by experts in film production, explored the dangers of AI-generated footage within documentary films and how it could shatter audience trust in the visuals they see. The AI-generated bias meter labeled this piece 'center-left' and provided 'different views'.

Another piece, reflecting on the history of the KKK in Anaheim, California, included an AI-generated defense of the Klan at the bottom, as the tech journalist Ryan Mac pointed out. It's since been removed.

I can't imagine what reader would want to trot around in this silly circle like a horse on a lead line. Most of us can read a viewpoint article and decide, all by ourselves, without a helpful robot, whether we agree. In the name of viewpoint diversity – but really to push his paper Trump-ward – Soon-Shiong has done far more harm than good. His bias meter should – quickly – go the way of hot type, the manual typewriter and the dodo.

Margaret Sullivan is a Guardian US columnist writing on media, politics and culture