AI Trained on Copyrighted Material Without Permission Poses 'Direct Threat' to Film Industry, Says BFI

Epoch Times | 10-06-2025
Artificial intelligence trained on copyrighted material without permission 'poses a direct threat to the economic foundations of the UK screen sector,' a report by the British Film Institute (BFI) says.
Other issues raised include concerns about the future of the workforce and the need for new skills training so that sector employees can adapt to and operate in an AI-enhanced screen industry.
The report on the use of AI in the film, TV, visual effects, and video game industries, published on Monday, says the training data for generative AI models includes scripts from more than 130,000 films and television shows, as well as databases of pirated books and content uploaded to YouTube.
The analysis was carried out in partnership with CoSTAR, the UK's creative research and development network, which is led by Goldsmiths, Loughborough, and Edinburgh universities.
Professor Jonny Freeman, director of CoSTAR Foresight Lab, said that the AI landscape is complex, bringing both advantages and disadvantages for the screen sector.
Freeman said, 'The report acknowledges that while AI offers powerful tools to enhance creativity, efficiency and competitiveness across every stage of the production workflow – from script development and pre-production planning, through on-set production, to post-production and distribution – it also raises urgent questions around skills, workforce adaptation, ethics, and sector sustainability.'
AI Used in Industry
One of the BFI's recommendations is for the UK to establish AI training licences, enabling deals between intellectual property (IP) rightsholders and AI developers.
'The UK is well-positioned to lead in this space, thanks to its "gold standard" copyright regime, a vibrant creative technology ecosystem, and a coalition of creative organisations advocating for fair licensing practices,' the report says.
AI is already being used for dubbing and visual effects in film and television, notably in the post-production of 'The Brutalist,' starring Adrien Brody.
Adrien Brody attending the special screening of 'The Brutalist' at Picturehouse Central Cinema, London, on Jan. 15, 2025. (Ian West/PA Wire)
The report also recommends the screen sector build skills complementary to AI, making the workforce resilient to technological changes.
'AI automation may, in time, lower demand for certain digital content creation skills. It may also create new opportunities for roles that require human oversight, creative direction, and technical fluency in AI systems,' it said.
Rishi Coupland, the BFI's director of research and innovation, said the report 'comes at a critical time and shows how generative AI presents an inflection point for the sector and, as a sector, we need to act quickly on a number of key strategic fronts.'
'While it offers significant opportunities for the screen sector such as speeding up production workflows, democratising content creation and empowering new voices, it could also erode traditional business models, displace skilled workers, and undermine public trust in screen content,' he said.
Artists Call for IP Protections
The report was published as the government attempts to pass the Data (Use and Access) Bill, which would allow tech companies to train their AI on copyrighted material unless the creator explicitly opts out. The House of Lords has been demanding an amendment be added to ensure artists are offered copyright protection.
High-profile artists are also calling for better protections for their creative output.
Culture Secretary Lisa Nandy arrives in Downing Street, London, England, on Oct. 30, 2024. (Lucy North/PA Wire)
Last week, Culture Secretary Lisa Nandy sought to reassure the creative sector that artists would not be harmed by future legislation and AI.
Addressing the Media & Telecoms 2025 and Beyond Conference on June 3, Nandy said: 'I will never stop working for creatives to deliver solutions, transparency, and the empowerment that you need in the digital age. We are a Labour government, and the principle that people must be paid for their work is foundational, and you have our word that if it doesn't work for the creative industries, it will not work for us.'

Related Articles

DOGE builds AI tool to cut 50 percent of federal regulations

Washington Post

The U.S. DOGE Service is using a new artificial intelligence tool to slash federal regulations, with the goal of eliminating half of Washington's regulatory mandates by the first anniversary of President Donald Trump's inauguration, according to documents obtained by The Washington Post and four government officials familiar with the plans.

The tool, called the 'DOGE AI Deregulation Decision Tool,' is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE's plans. Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback.

The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified 'external investment.'

The tool has already been used to eliminate more than 1,000 'regulatory sections' at the Department of Housing and Urban Development in under two weeks, according to the PowerPoint, and to write '100% of deregulations' at the Consumer Financial Protection Bureau (CFPB). Three HUD employees — as well as documents obtained by The Post — confirmed that an AI tool was recently used to review hundreds, if not more than 1,000, lines of regulations at that agency and suggest edits or deletions.

The tool was developed by engineers brought into government as part of Elon Musk's DOGE project, according to two federal officials directly familiar with DOGE's work, who, like others interviewed for this story, spoke on the condition of anonymity to describe internal deliberations they were not authorized to discuss publicly.

Conservatives have long argued that the federal government issues far too many regulations that constrain economic growth and hurt the private sector. Many liberals have emphasized that there are reasons federal regulations are in place, such as protecting the environment and ensuring food safety.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that 'all options are being explored' to achieve the president's goal of deregulating government. Fields noted that 'no single plan has been approved or green-lit,' cautioning that the work is 'in its early stages and is being conducted in a creative way in consultation with the White House.' Fields added: 'The DOGE experts creating these plans are the best and brightest in the business and are embarking on a never-before-attempted transformation of government systems and operations to enhance efficiency and effectiveness.'

One former member of DOGE, which stands for Department of Government Efficiency, wrote in a text message that the team did everything it could to come up with legal and technological solutions to repeal as many regulations as possible within Trump's term. 'Creative deployment of artificial intelligence to advance the president's regulatory agenda is one logical strategy to make significant progress in that finite amount of time,' wrote James Burnham, who served as chief attorney for DOGE and is now managing partner at King Street Legal.

The proposed use of AI to accomplish swift, massive deregulation expands upon the Trump administration's work to embed AI across the government — using it for everything from fighting wars to reviewing taxes.
And it dovetails with the administration's aim to unwind regulations government-wide, even without AI. But a new, untested technology could make mistakes in its attempts to analyze federal regulations that are typically put in place for a reason.

On Jan. 31, Trump issued an executive order to 'unleash prosperity through deregulation,' which required agencies to repeal 10 rules for every new rule issued. Since then, some departments have engaged in what almost appears to be a competition to cut. In May, the Transportation Department declared it had deleted 52 regulations and more than 73,000 words from the Federal Register. This month, the Labor Department announced plans to nix more than 60 regulations.

Still, Republicans have grown frustrated by the relatively slow pace of deregulatory actions. During the first six months of Trump's first term, his administration cut costs by about $550 million and paperwork hours by 566,000, according to the American Action Forum, a center-right think tank that tracks regulations. Through July of this year, the Trump administration has achieved nearly all its cost reductions by repealing one rule regarding what businesses must report about their ownership ties. Without that, the Trump administration would have increased regulatory costs by $1.1 billion and paperwork hours by 3.3 million, according to the think tank.

'They're way behind where they were in 2017 on the numbers, no question about it,' said Doug Holtz-Eakin, president of the American Action Forum and former director of the nonpartisan Congressional Budget Office. 'I thought this was going to be something they crushed because they did so in 2017. I've been baffled by this.'

The AI tool is intended to massively accelerate the deregulation process, with every federal agency able to develop a list of regulations to eliminate in less than four weeks, according to the PowerPoint. The agencies are supposed to finish their lists by Sept. 1, and this month, DOGE is supposed to start training staff at agencies on how to use the AI tool, the PowerPoint states.

While DOGE had pushed earlier this year to take a larger role in the deregulatory effort, the Musk-led team was frequently rebuffed by agency employees who worried about outsourcing decisions and their authorities, according to three people who have participated in deregulatory conversations at the White House and the agency level, who spoke on the condition of anonymity to share private conversations. Federal officials also questioned whether DOGE had the subject matter expertise to comb through highly technical regulations and find appropriate targets for cuts, the people said.

As DOGE's influence waned following Musk's departure, the administration has remained focused on Trump's deregulatory order, the people said. White House staff are also using internal trackers to monitor how quickly agencies are paring regulations, while leaders at every major agency are meeting regularly to discuss how quickly they can meet Trump's ambitions and which cuts 'count' toward the president's order, according to the people.

In some cases, DOGE's campaign to fire federal workers and dramatically shrink the federal workforce has hampered the deregulatory effort, the three people said. 'The White House wants us higher on the leader board,' said one of the three people. 'But you have to have staff and time to write the deregulatory notices, and we don't. That's a big reason for the holdup.'
Trump officials have tried to use AI to roll back regulations before. At the Department of Health and Human Services, a 2020 'Regulatory Clean Up Initiative' drew on an AI tool to identify and remove archaic language, defunct federal provisions and outdated terms from federal rules.

Trump has pushed the limits of the Administrative Procedure Act, which governs repealing federal regulations, most notably through an executive order ending a rule that restricted the water flow of showerheads. It is unclear if courts will allow the administration to void rules. Meanwhile, private-sector companies tend to be uncomfortable ignoring a rule that was illegally repealed, said Nicholas Bagley, an administrative law expert at the University of Michigan. 'There's been some flashy sideshow efforts to avoid the legal strictures, but in general, they don't stick,' Bagley said of Trump's unilateral efforts to cut regulations.

DOGE officials may be concerned about the legality of the AI tool. One page of the slideshow says four people identified as 'DOGE lawyers' — Burnham, Austin Raynor, Jacob Altik and Ashley Boizelle — each 'vetted and endorsed' the AI deregulation tool. Raynor, Altik and Boizelle could not be reached for comment.

Federal regulations, as they stand now, can be divided into three categories, the PowerPoint says: 50 percent are not required by law, 38 percent are statutorily mandated and 12 percent are 'Not Required but Agency Needs.' By ending the rules that are unnecessary both by law and to agency operations, the PowerPoint states, the government could recover $3.3 trillion a year. But the PowerPoint also suggests it would take 3.6 million 'man-hours' to nix 100,000 regulations under the current system. It is not clear how the PowerPoint's authors arrived at these figures.

That's where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent a grand total of 36 hours gutting half of all federal regulations, the PowerPoint claims.

The PowerPoint lists two case studies as examples of how well its AI tool can work, detailing recent efforts to slash regulations at HUD and CFPB. Asked about the AI-driven regulation slashing, a HUD spokesperson wrote in a statement that the agency is having 'ongoing discussions' to consider how to make government more efficient. 'We are not disclosing specifics about how many regulations are being examined or where we are at in the broader process,' the spokesperson said, adding, 'the process is far from final.' The spokesperson continued: 'The intent of the developments is not to replace the judgement, discretion and expertise of staff but be additive to the process.' CFPB did not respond to questions. The Post was not able to independently confirm the use of AI at the agency.

At HUD, efforts to use AI to kill regulations began three months ago, according to three employees familiar with the matter and emails obtained by The Post. A message sent to some of the agency's Public and Indian Housing staff on April 18 announced a 'DOGE team' would be 'learning how AI will be able to analyze all PIH regulations looking for and flagging discrepancies between them and the underlying statute.' 'This is a major effort,' the email continued.
'We are working with the lawyers to simplify the [Administrative Procedure Act] process … use AI for drafting, and use AI for complying notices in the future.' The overall goal, the email noted, was to deploy AI to reduce the time staff had to spend on deregulation.

Another document, signed 'HUD DOGE Team' and sent to staff, detailed how DOGE team members wanted federal staffers to engage the AI tool. Staffers were supposed to look over the tool's recommendations for proposed regulatory eliminations and mark whether they agreed, disagreed or believed deletions should go further.

One HUD employee who participated in this process said the AI tool made several errors. It delivered an analysis saying those who drafted various agency regulations had misunderstood the law in several places, said the employee, who spoke on the condition of anonymity to reveal internal conversations. But the AI tool was sometimes wrong, the employee said. 'There were a couple places where the AI said the language was outside of the statute,' the employee said, 'and actually, no — the AI read the language wrong, and it is actually correct.'

After its tryout at HUD, the AI deregulation tool is supposed to deploy across the rest of government in coming months, according to the DOGE PowerPoint. Over the next five months, agencies will work with the AI tool to identify regulations to kill, respond to public comments about the proposed deletions and submit formal deregulation proposals, the PowerPoint says. The goal is to wrap everything up and 'Relaunch America on Jan. 20, 2026,' the PowerPoint states.

What Ancient Farmers Can Teach The Modern Boardroom About AI Strategy

Forbes

Every day in 2025 brings a new AI milestone. From generative tools rewriting code to AI copilots augmenting medical diagnoses, it's easy to feel we've entered unprecedented terrain. But we've been here before. Just ask our ancient ancestors—who faced the first great disruption when they transitioned from hunting and gathering to agriculture. That shift didn't just change how we ate. It transformed how we lived, worked, governed, and grew. The parallels to today's AI revolution are striking—and instructive. If history is any guide, organizations that manage technological transitions through thoughtful governance, strategic investment in people, and adaptive policies will not only survive but thrive.

From Stone Tools to Silicon Chips: How Societies Adapt to Disruption

Roughly 12,000 years ago, human communities in the Fertile Crescent began cultivating crops and domesticating animals. Archaeological sites like Abu Hureyra in modern-day Syria reveal this wasn't a sudden break from the past but a gradual, iterative process of learning, testing, and integrating new tools into existing ways of life. This evolutionary—not revolutionary—mindset offers a key lesson: Transformation doesn't mean total disruption. It means layering innovation onto what works and building systems that scale over time. The AI transition requires a similar approach: pilot programs, feedback loops, reskilling, and workforce support must evolve hand-in-hand with technological integration.

Strategy: AI Integration Requires Incremental Adaptation

Just as early agricultural societies developed irrigation systems and record-keeping via clay tablets, modern organizations are building the infrastructure to support AI-enabled workflows. But success hinges not on speed, but on sequencing. McKinsey reports that although generative AI could add up to $4.4 trillion in global productivity annually, only 21% of companies had adopted AI in more than one business function as of 2023. This isn't a failure—it's a sign that organizations are proceeding thoughtfully. Boards and CFOs should be tracking AI ROI, not only in terms of cost savings but in how it reshapes value creation. That means budgeting for phased implementation and workforce transformation simultaneously.

Policy: Redesigning Governance for Human-AI Collaboration

Early civilizations didn't just invent tools—they also wrote laws. The Code of Hammurabi, dating to 1750 BCE, is one of the first known legal frameworks for managing agricultural property, labor, and dispute resolution. Today's version? AI ethics policies, data governance protocols, and algorithmic accountability. Regulators are already acting. The EU's AI Act, passed in 2024, imposes risk-based requirements for transparency, bias mitigation, and human oversight. Meanwhile, the European Sustainability Reporting Standards (ESRS), specifically S1 and S2, developed under the guidance of EFRAG, now require companies to report on the impact of AI on workforce strategy and human capital governance. Boards must go beyond compliance and ask harder questions about how AI systems are governed and deployed. The fiduciary risks are real. A poorly governed algorithm can expose companies to legal liability, reputational damage, and talent loss. Treat AI governance as a board-level issue, not just a technical one.

Programs: Cultivating Workforce Resilience and Knowledge Transfer

Farming didn't eliminate the need for hunting. Early societies maintained both capabilities as insurance against failure. Similarly, organizations shouldn't rush to fully automate without investing in human capital. This means reskilling must become the norm. According to a PwC study on workforce hopes and fears, 40% of workers will need up to six months of training to remain relevant in the AI economy. High-performing companies are already taking the lead. AT&T's collaboration with Udacity to create nano-degree programs reduced reskilling time by 35% and boosted internal mobility. Boards and CHROs should champion programs that build these capabilities. If early societies could preserve astronomy, crop rotation, and animal husbandry without Google, we can certainly codify AI literacy and strategic workforce knowledge today.

Measuring What Matters: Human Capital as a Source of Value

The agricultural revolution spurred population growth, urbanization, and eventually, modern economies. Today's AI revolution will reshape how we measure value—especially in intangible assets like skills, collaboration, and creativity. Research by Alex Edmans shows that companies investing in employee well-being significantly outperform their peers in long-term shareholder returns. This is why ISO 30414 and SEC human capital disclosure expectations are gaining traction. Human Capital ROI (HCROI) should be tracked with the same rigor as Return on Equity or Return on Investment. Boards should demand metrics that show not just AI adoption but how it enhances organizational resilience and workforce productivity.

Final Thought: Change Is Inevitable. Human Judgment Is Indispensable.

AI may feel new, but the pattern is ancient. Transformation is never just about tools—it's about how we govern change, support people, and sustain growth. The societies that thrived in the wake of agriculture weren't the most technologically advanced. They were the ones that integrated new tools into stable, human-centered systems. As we stand at the threshold of another era, it's time to remember what worked the first time: governance, adaptation, and investment in human capability. Let's not forget—we've been here before.

Postscript: With appreciation to Stela Lupushor, whose co-authored blog post with me on this topic served as inspiration for this column.

AI Founder Pays $38.2 Million for Beachfront Miami-Area Penthouse

Wall Street Journal

This spring, 42-year-old tech entrepreneur Daniel Nadler gave up his Miami rental apartment and moved into a beachfront hotel. The goal was to streamline his life and focus on building OpenEvidence, his Google-backed medical AI company, which is valued at $3.5 billion. 'I didn't want the overhead of dealing with houses and all of the stuff that comes with houses,' he said. 'If I could wake up at 4 a.m. and just order room service—this is so perfect.'
