Copyright Office Offers Assurances on AI Filmmaking Tools

Yahoo | 29-01-2025
The U.S. Copyright Office declared Wednesday that the use of artificial intelligence tools to assist in the creative process does not undermine the copyright of a work.
The announcement clears the way for continued adoption of AI in post-production, where it has become increasingly common, such as in the enhancement of Hungarian-language dialogue in 'The Brutalist.' Studios, whose business model is founded on strong copyright protections, have expressed concern that AI tools could be inhibited by regulatory obstacles.
In a 41-page report, the Copyright Office also reiterated that human authorship is essential to copyright, and that merely entering text prompts into an AI system is not enough to claim authorship of the resulting output.
This is the first time the Copyright Office has weighed in on the issue since March 2023, just a few months after the release of ChatGPT. The report broadly aligns with the office's earlier positions, though it offers greater assurance of AI's legitimacy when used to supplement the creative process.
'The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output,' the report states.
Consistent with its earlier guidance, the office also held that a work is eligible for copyright protection if the author creatively 'selects and arranges' AI-generated elements.
'Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material,' the report states.
The report marks a significant event in the two-year debate over AI in the creative fields. When the Copyright Office asked for input on the issue in 2023, it received more than 10,000 comments, including many from artists and musicians who argued that AI steals their work and poses a grave threat to their livelihoods.
This is the second of three AI reports based on that input. The first, issued last July, called for legislation to combat AI-generated replicas that mimic a person's voice and likeness. A third report will tackle the fraught debate over whether AI models should be allowed to 'train' on copyrighted work without a license.
In its initial AI guidance two years ago, the office emphasized that work created by a machine is not eligible for copyright protection. Copyright registrants were directed to disclaim any AI-generated material.
The Motion Picture Association, which represents seven major studios, took issue with that provision, saying it was 'misguided' and that it would prove burdensome and unworkable in the context of films and TV shows. The MPA cited a series of post-production processes — such as de-aging actors, removing unwanted objects from shots, and rotoscoping — that can benefit from AI.
'Artists have expressed enthusiasm for AI tools that enhance their work, and for continued technological development of these and similar tools,' the MPA wrote. 'In short, the use of AI technology presents developing opportunities for creators and their audiences. MPA's members are optimistic about that future.'
In its report, the Copyright Office referenced the MPA's comments about de-aging and other post-production effects, and said it agrees that 'assistive uses that enhance human expression do not limit copyright protection.'
The office also said it has no issue with AI being used as a brainstorming tool, or to create outlines for literary works.
Where it draws the line is with systems like Midjourney, which can generate images based on simple text prompts. To illustrate its point, the Copyright Office used Google's Gemini to create an image of a cat smoking a pipe.
Many elements of the image — such as the human hand — appear to be random, the office found, concluding that the user does not have sufficient control to claim authorship.
Some commenters argued that AI image generation is an iterative process and that the user can assert control through multiple revisions of the prompts. The Copyright Office was not persuaded by that argument.
'By revising and submitting prompts multiple times, the user is 're-rolling' the dice, causing the system to generate more outputs from which to select, but not altering the degree of control over the process,' the report states.
The office also rejected the idea of creating additional copyright protection for AI-generated work, taking note of potential threats to human creators, while acknowledging that 'its effects on employment are difficult to predict.'
'We share the concerns expressed about the impact of AI-generated material on human authors and the value that their creative expression provides to society,' the office stated. 'If authors cannot make a living from their craft, they are likely to produce fewer works. And in our view, society would be poorer if the sparks of human creativity become fewer or dimmer.'
Related Articles

Palo Alto's forecast signals AI boost for cybersecurity tools

Yahoo

(Reuters) - Palo Alto Networks forecast fiscal 2026 revenue and profit above Wall Street estimates on Monday, betting on growing demand for its artificial intelligence-powered cybersecurity solutions, sending its shares up nearly 5% in extended trading.

The company has been benefiting from an AI-driven upgrade cycle as enterprises accelerate their cloud adoption and modernize security operations amid rising data breach incidents. A wave of high-profile cyberattacks has hit global companies including Microsoft, UnitedHealth Group, Walt Disney and Oracle, prompting the need for robust security solutions.

Palo Alto's new launches, such as the cloud security platform 'Cortex Cloud' and 'Prisma AIRS,' a security platform to protect AI apps, together with its planned $25 billion CyberArk acquisition, deepen its cybersecurity offerings.

The company also announced that founder and Chief Technology Officer Nir Zuk will retire after more than 20 years with Palo Alto. Long-time product leader Lee Klarich has been named CTO and board member. Klarich will also chair the board's security committee in a bid to bolster the company's AI-driven platform strategy.

Palo Alto competes with CrowdStrike, Fortinet and Zscaler. Its customers include Salesforce, Dell Technologies and NetApp.

The company projected annual revenue between $10.48 billion and $10.53 billion, above analysts' average estimate of $10.43 billion, according to data compiled by LSEG. It expects adjusted profit per share of $3.75 to $3.85 for the fiscal year, above estimates of $3.67. Its first-quarter revenue forecast of $2.45 billion to $2.47 billion came in above expectations of $2.43 billion, and its adjusted quarterly earnings forecast of 88 cents to 90 cents per share was also above estimates of 85 cents.

Palo Alto also reported upbeat fourth-quarter results. Revenue grew 16% from a year ago to $2.54 billion, and adjusted EPS of 95 cents for the quarter ended July 31 beat estimates of 88 cents.

All-In Podcast Boys Poke Fun at Uber Founder's 'AI Psychosis' (Which They Encouraged)

Gizmodo

Remember when the guys over at the All-In podcast talked with Uber founder Travis Kalanick about 'vibe physics'? Kalanick told viewers that he was on the verge of discovering new kinds of science by pushing his AI chatbots into previously undiscovered territory. It was ridiculous, of course, since that's not how an AI chatbot or science works, and Kalanick's ideas were ridiculed to no end by folks on social media. But the gentlemen of All-In now seem to be distancing themselves from Kalanick's ideas, even suggesting they could be related to the rise of 'AI psychosis,' despite the fact that they were more than happy to entertain the Uber founder's rambling nonsense when he was on the show.

Kalanick appeared as a guest on the July 11 episode of All-In, explaining very earnestly how he was on the cusp of discovering exciting new things about quantum physics, previously unknown to science. 'I'll go down this thread with [Chat]GPT or Grok and I'll start to get to the edge of what's known in quantum physics and then I'm doing the equivalent of vibe coding, except it's vibe physics,' Kalanick explained. 'And we're approaching what's known. And I'm trying to poke and see if there's breakthroughs to be had. And I've gotten pretty damn close to some interesting breakthroughs just doing that.'

The reality is that AI chatbots like Grok and ChatGPT are not capable of delivering new discoveries in quantum physics because that's beyond their capabilities. They spit out sentences by remixing and rehashing their training data, not by testing hypotheses. But All-In co-host Chamath Palihapitiya thought Kalanick was on to something, taking it a step further by insisting that AI chatbots could just figure out the answer to any problem you posed. 'When these models are fully divorced from having to learn on the known world and instead can just learn synthetically, then everything gets flipped upside down to what is the best hypothesis you have or what is the best question? You could just give it some problem and it would just figure it out,' said Palihapitiya.

This kind of insistence that AI chatbots can solve any problem is central to their marketing, but it also sets up users for failure. Tools like Grok and ChatGPT still struggle with basic tasks like counting the number of U.S. state names that contain the letter R, because that's not what large language models are good at. But that hasn't stopped folks like OpenAI CEO Sam Altman from making grandiose promises.

Co-host Jason Calacanis was the only one to suggest during the July 11 episode that perhaps Kalanick was misunderstanding his own experience. Calacanis asked Kalanick if he was 'kind of reading into it and it's just trying random stuff at the margins.' The Uber founder acknowledged that the AI can't really come up with a new idea, but said that was only because 'these things are so wedded to what is known.' Kalanick compared it to pulling a stubborn donkey, suggesting it was indeed capable of new discoveries if you just worked hard enough at it.

You'd expect that to be the last word on the topic, given that the All-In guys like to avoid controversy. They infamously failed to produce an episode of the podcast the week that Elon Musk and President Trump had their blowout. (The podcast hosts are all friends with Musk, and co-host David Sacks is Trump's crypto czar.) So listeners of the new episode may have been a bit surprised to hear Kalanick's weird ideas discussed again, especially if it was to poke fun at him.

The latest episode of All-In, uploaded on Aug. 15, opened with a discussion of so-called 'AI psychosis,' a term that hasn't been defined in the medical literature but has emerged in popular media to describe how people who are struggling with their mental health might see their symptoms exacerbated by engaging too much with AI. Gizmodo reported last week about complaints filed with the FTC from users experiencing hallucinations, egged on by ChatGPT. One complaint even told of how a user stopped taking his medication because ChatGPT told him not to while he was experiencing a delusional breakdown.

AI psychosis isn't a clinical term, and it's hard to determine the precise number of people experiencing severe strains on their mental health from the use of AI chatbots. But ChatGPT's creator, OpenAI, has acknowledged that it's a problem. And Calacanis opened the show talking about how people can get 'one-shotted,' new slang co-opted from video games and applied to people who fall too deep into the AI rabbit hole. They anthropomorphize AI and fail to understand it's just a computer program, sending themselves into a delusional spiral.

'You may have even witnessed a little bit of this when Travis [Kalanick] was on the program a couple weeks ago and he said he was like spending his time on the fringes or the edges of… physics,' Calacanis said. 'It really can take you down the rabbit hole.'

'Are you saying Travis is suffering from AI psychosis?' co-host David Friedberg asked.

'I'm saying we may need to do a health check. We may need to do a health check because smart people can get involved with these AI. So we may have to do a little welfare check on our boy TK,' Calacanis said, seemingly in earnest.

Palihapitiya seemed to think the underlying problem with AI psychosis was just a product of the so-called loneliness epidemic, but he ignored his own role in feeding Kalanick's narrative that AI chatbots were truly capable of new discoveries in science. David Sacks wasn't having it, insisting that AI psychosis was just a moral panic similar to the fears over social media 20 years ago. 'This whole idea of AI psychosis, I think I gotta call bullshit on the whole concept. I mean, what are we talking about here? People doing too much research?' Sacks said, trying to downplay the news reports. 'This feels like the moral panic that was created over social media, but updated for AI.'

Sacks admitted there was a mental health crisis in the U.S. but didn't believe it was AI's fault. And there's probably some truth to what Sacks is saying. All new technologies bring some form of social upheaval and worries about what a given invention might mean for the future. But there's also no denying that people are much lonelier and more isolated since the advent of social media, and that may not all be social media's fault. Revolutionary technologies will inevitably have both positive and negative impacts on society. The question is always whether the positives outweigh the negatives. And the jury is arguably still out on both social media and AI chatbots.

Google AI Pioneer Employee Says to Stay Away From AI PhDs

Entrepreneur

Jad Tarifi, who founded Google's first generative AI team and has a Ph.D. from the University of Florida, wouldn't recommend starting a program now.

AI researchers are in high demand, with some offered billion-dollar compensation packages from Meta amid the ongoing AI talent wars. However, one AI pioneer, Jad Tarifi, who founded Google's first generative AI team after obtaining a Ph.D. in AI, would not recommend higher study as a way to break into the field. In a new interview with Business Insider, Tarifi, 42, predicted that within the five to seven years it takes to obtain a Ph.D., most of AI's problems will be solved. "Even things like applying AI to robotics will be solved by then," Tarifi told BI.

Tarifi explained that obtaining a Ph.D. was only for "weird people" who were "obsessed" with a certain field, because higher education required "a lot of pain" and at least five years of their lives. He recommended staying away from the Ph.D. route altogether, or specializing in a subfield of AI that is still in its early stages, like AI for biology.

Tarifi received a Ph.D. in 2012 from the University of Florida, where he worked on an AI theory that combined principles from neuroscience, geometry, and machine learning, according to his LinkedIn. He then joined Google, where he was a tech lead and manager for nearly a decade, working on models for Google's generative AI projects. Tarifi is now the founder and CEO of Integral AI, a startup focused on creating AI agents that act autonomously on behalf of users.

In the BI interview, Tarifi also warned prospective students against degrees in law and medicine, arguing that the information in these programs is "outdated" and memorization-based.

Tarifi isn't the first person to warn students away from higher degrees. Venture capitalist Victor Lazarte said earlier this year that AI is "fully replacing people" in the legal profession. He predicted that within the next three years, AI would take over the entry-level legal positions usually filled by recent law school graduates.
