
Transparency Deferred: What The UK's Data Bill Means For Music, AI And Copyright
After months of political turbulence, the UK's Data (Use and Access) Bill has finally passed Parliament. Marketed as a major update to the country's data infrastructure and digital governance, the bill covers everything from NHS data interoperability to digital ID systems and AI-enabled decision-making. The text is broad in scope, modernizing the UK's GDPR, streamlining data subject access, and enabling more fluid data sharing across public services and smart infrastructure.
However, it also weakens restrictions on automated decision-making and sidesteps key copyright issues raised by AI.
Although the Data Bill does not legislate on copyright directly, creative industries had hoped it would include minimal safeguards for the use of copyrighted works in AI training.
In parallel with the bill, the government has signaled, through the UK Intellectual Property Office's consultation on generative AI, support for a model that would allow AI developers to mine copyrighted content by default unless rights holders explicitly opt out. This mirrors the EU's controversial text and data mining (TDM) exception, a proposal many in the creative industries see as deeply problematic.
An amendment imposing a transparency duty on AI developers was passed repeatedly in the Lords, but was ultimately rejected in the Commons for a sixth and final time.
For the UK's creative sectors (music, publishing, film, and visual arts), which collectively generate over £124 billion annually, the final version of the bill represents a missed opportunity and a potentially dangerous precedent. It leaves songwriters, recording artists, and rights holders unable to determine whether their work has been ingested into AI training datasets, with no clear obligation on companies to provide transparency or seek permission.
Training Without Traceability
The Lords amendment, proposed by Baroness Beeban Kidron, became the focal point of this battle. Her proposal was straightforward: require AI developers to disclose what datasets and copyrighted material they used to train generative AI systems. The amendment kept passing in the Lords with growing support, only to be killed repeatedly in the Commons. The government refused to accept it, claiming that it would stifle innovation and that copyright would be addressed in a separate AI-specific bill after a public consultation.
But as Jane Clementson, a lawyer who advises media and creative businesses on the creation and exploitation of intellectual property, explains, the government had a ready excuse: "The DUA Bill was never intended to address copyright law, so amendments about AI training data were resisted. The Government's view was that this wasn't a copyright bill—wrong vehicle for such a complex issue." This reasoning allowed ministers to sidestep the core issue while promising to address it later in a separate AI bill.
The current UK copyright framework under Section 29A of the Copyright, Designs and Patents Act allows TDM only for non-commercial research purposes. This means AI developers may lawfully copy and analyze copyrighted content only if the use is non-commercial, and even then, only under specific conditions.
The Data Bill leaves this provision untouched, but it also fails to strengthen copyright protections or clarify enforcement, despite the rapid growth of commercial AI training.
"Support for innovation shouldn't come at the cost of fairness," explains Rick Gleaves, a music-tech strategist and founder of Music Foundry. "The current trajectory risks building AI systems on the backs of unlicensed creative works (music, lyrics, performances) without attribution or compensation. That's not a sustainable model."
Because the law lacks meaningful enforcement and does not mandate dataset disclosure, AI developers can ingest massive libraries of music and argue that their use remains 'non-commercial' as long as they don't sell the original content directly. Instead, they train generative models that create synthetic outputs which compete directly with the original works, often replicating stylistic, lyrical, or sonic elements. And the value extracted at the training stage powers downstream applications and services generating vast profits.
"The refusal to amend the bill weakens the UK's standing as a defender of copyright and the creative industries," argues Gleaves. "We've traditionally prided ourselves on striking a fair balance between innovation and rights protection, but this bill tips the scales toward data access and AI development without adequate safeguards for creators."
The asymmetry is stark. Developers gain free rein to mine cultural data while creators remain in the dark. The proposed opt-out mechanism might sound like a compromise, but without mandatory transparency, it becomes meaningless in practice. Rights holders cannot opt out of training datasets they aren't even aware they're part of.
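The bill itself specifies no machine-readable opt-out standard. In practice, the closest thing rights holders have today is blocking the crawler tokens that AI firms publish for robots.txt, such as OpenAI's GPTBot, Google's Google-Extended, and Common Crawl's CCBot. As a minimal illustration (not drawn from the bill or the article), here is how a rights holder might audit that de facto opt-out using Python's standard library; the domain is a placeholder:

```python
# Minimal sketch: check whether a site's robots.txt blocks user-agent
# tokens that major AI companies have published for training-data
# crawling. The domain below is hypothetical.
from urllib import robotparser

# Published crawler tokens: GPTBot (OpenAI), Google-Extended (Google's
# AI-training control), CCBot (Common Crawl, a common training source).
AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]

def audit_opt_out(site: str) -> dict[str, bool]:
    """Return, per crawler token, whether robots.txt still permits crawling."""
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # fetches and parses the live robots.txt
    return {agent: parser.can_fetch(agent, f"{site}/") for agent in AI_CRAWLERS}

if __name__ == "__main__":
    # Hypothetical catalogue site; substitute a real domain.
    for agent, allowed in audit_opt_out("https://example.com").items():
        print(f"{agent}: {'NOT opted out' if allowed else 'opted out'}")
```

Even a perfectly configured block of this kind is prospective only: it cannot reveal, let alone unwind, what has already been ingested, which is precisely the gap the rejected transparency duty was meant to close.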
Clear Law, Rampant Violations
These concerns aren't theoretical. The International Confederation of Music Publishers (ICMP), the global trade body representing major and independent publishers and some 80 national trade associations across six continents, has documented clear evidence that commercial AI systems, including Suno, Udio, Gemini, and DeepSeek, have been trained on unlicensed music. Despite claims of safeguards, these systems can, when prompted, generate synthetic outputs that replicate the sonic and lyrical fingerprints of existing songs with no licensing agreements in place.
The legal requirement is clear: AI developers must license copyrighted material when using it for commercial purposes. The problem is enforcement, which has fallen behind, and multiple lawsuits show just how blurred the lines have become. The following examples, collected by ICMP, contradict government claims that enforcement is premature, showing that unlicensed reproduction is already widespread and increasingly sophisticated.
- When prompted to analyze the lyrics of "Billie Jean" by Michael Jackson, Google's Gemini model outputs the full lyrics, despite no licensing agreement with rights holders. This directly contradicts claims made by some AI developers and policymakers that generative systems are trained only on "non-consumptive" data or that robust filters prevent reproduction of copyrighted content.
- DeepSeek, a Chinese-developed model, goes even further. It can reproduce full copyrighted lyrics, including recent songs, formatted and tagged with metadata scraped from platforms like Spotify. This suggests an intentional bypass of standard licensing practices and highlights how easily some developers evade rights protections.
- In Germany, the collection society GEMA has flagged a Suno-generated track for strong similarities to Alphaville's 1984 hit "Big in Japan." According to GEMA, the AI-generated version reproduces the lyrics almost verbatim, with matching phrasing and structure, despite no licensing deal existing between Suno and Alphaville's rights holders. GEMA has filed a lawsuit against Suno for copyright infringement, further alleging that the company has reproduced protected works without permission across jurisdictions. The suit also cites additional tracks allegedly copied from Alphaville ("Forever Young"), Kristina Bach ("Atemlos durch die Nacht"), Lou Bega ("Mambo No. 5"), Frank Farian ("Daddy Cool"), and Modern Talking ("Cheri Cheri Lady").
- Similarly, Udio, a fast-growing AI music generator, has been shown to produce songs that closely imitate the Beatles' musical style, lyrical tone, and even vocal timbre. Prompts like "write a Beatles-style ballad about longing" yield tracks that mirror the harmonic structure, instrumentation, and production techniques of Lennon–McCartney compositions. While not replicating lyrics verbatim, the outputs often share thematic content, rhyming patterns, and arrangements, effectively creating derivative works. Udio has no licensing agreement with Apple Corps, Sony/ATV, or any entity managing the Beatles' catalog, making these outputs clear examples of unlicensed stylistic appropriation. (This example is drawn from the ICMP evidence submission.)
"It actually doesn't need to work this way," says John Phelan, CEO of ICMP. "Ours is an industry built on exclusive rights, and what that literally means is not so much that we want to restrict use, some other creative sectors are much less willing to license works, but rather that commercial users need prior authorization to be legal."
The irony runs deeper. ICMP's analysis of tech company contracts reveals a telling double standard. Google, Microsoft, Meta, OpenAI, Suno, Udio and others all include clauses demanding "no use of our content without express prior written permission."
Calling out the contradiction, Phelan says: "We in the music industry should not be reluctant to point out this commercial hypocrisy and demand total respect for our songwriters' property rights."
In effect, the UK's copyright landscape does not lack clarity: rights holders already have control, and licensing training data for commercial generative AI is required by law. Yet Science and Technology Secretary Peter Kyle has said several times that his preferred outcome on AI and copyright is a new exception allowing unlicensed training and that UK copyright law is currently uncertain, dismissing protesting artists as "people who resist change."
The Politics of AI-First Growth
To understand how Britain reached this point, look beyond the legislation to the political backdrop. Labour won a commanding 411 of 650 seats on just 33% of the national vote: strong legislative control built on fragile public support. Desperate for economic wins, the Starmer government has bet heavily on positioning the UK as a global AI hub. The Data Bill serves that agenda: deregulate, incentivize, and let innovation flourish. "By rejecting the Lords' transparency amendments and deferring copyright enforcement to a vague future bill, the government has effectively given AI developers free fuel in the form of unlicensed cultural content," observes Jake Beaumont Nesbitt, consulting artist manager, entertainment-tech advisor, and director of innovation at the International Music Managers Forum. He also notes that the UK government has just backed the music industry with a £30 million investment package, adding: "One could see this as a bunch of flowers offered to the creative industries after the Government ran off with the Tech Sector."
Ministers frame the bill as a growth engine, freeing NHS staff from admin and energizing fintech services. Copyright, they argue, is too complex to address in legislation aimed at "improving people's lives."
But peers openly accused the government of bowing to Big Tech lobbying. New figures obtained by Democracy for Sale reveal that Labour ministers and senior civil servants met with tech industry executives and lobbyists an average of six times a week during the government's first six months in office. Peter Kyle asked Google DeepMind CEO Demis Hassabis to "sense check" AI policy, and Hassabis is now a formal advisor on the government's AI plan. The Technology Secretary also said he would "advocate" for Amazon at the UK's competition regulator, whose case against Amazon was subsequently dropped.
As Baroness Kidron put it: "Silicon Valley has persuaded the government that it's easier to redefine theft than make them pay for what they have stolen." And Phelan confirmed: "Artificial Intelligence covers a multitude of different services. It's as specific as using the word technology. But suffice to say the music industry has been longstanding adopters of and innovators in AI, from licensing admin to searching song databases, copyright infringement prevention to amazing new visual effects to enhance the concert experience. Any characterisation of AI and the music industry as antiphonal is way wide of the mark."
The Promise of Tomorrow
Technology Secretary Peter Kyle has promised a "comprehensive" AI bill that will revisit transparency and opt-out mechanisms, potentially arriving by May 2026. The government committed to publishing reports on copyright and AI within nine months, including analysis of economic impacts on creators and developers.
Jane Clementson explains: "The Secretary of State is obliged, within nine months of Royal Assent being given, to publish an impact assessment of the economic impact of each of the four policy options described in the government's recent Copyright & AI consultation paper — including the impact on copyright owners and development users. They must also present a report to Parliament on how copyrighted works are being used to develop AI systems."
But creative industries worry this delay is strategic. And as Beaumont Nesbitt points out: "Rather than a well-thought-out long-term strategy, this hands-off approach is a short-term political gamble. The Government believes it's too soon to regulate, and is giving these (mostly ex-UK) companies not only a green light, but free fuel."
Each month of delay invites further ingestion of Britain's cultural catalogue, expanding AI libraries at zero cost while eroding the scarcity on which copyright economics rests. As Nick Breen, partner at the global law firm Reed Smith LLP, explains, here's what to expect next: "Now that the government has committed to providing a report, we can expect intense lobbying from both sides—on everything from transparency and copyright exceptions for training, to international interoperability, protection of AI outputs, and image rights. Given the UK's hesitation to legislate prematurely, it now faces pressure to offer clarity and show how it has balanced competing interests. In the meantime, ongoing litigation—such as Getty's case against Stability AI in the High Court—will continue to shape the landscape."
But by the time comprehensive legislation arrives, the market may have already normalized unlicensed training. John Phelan makes the point plainly: "To date, I have still not seen any provable pathway to becoming more economically competitive by way of a government reducing copyright standards. There is no credible evidence that if you make that industrial policy choice, an increased influx of foreign direct investment or bolstering of the start-up economy follows suit."
A Cultural Reckoning
The Data Bill represents more than policy; it marks a cultural turning point. In 1710, the Westminster Parliament passed the Statute of Anne, establishing copyright terms that protected authors' rights for 14 years. That principle has served the UK's economy and culture for over 300 years, evolving through numerous updates and international agreements and adapting to new technologies while preserving its core spirit.
Now, as Beaumont Nesbitt warns, Parliament risks "strangling the new model for creators at birth." In an era where an artist's Name, Image, Likeness, and Voice increasingly drive value, allowing unlicensed use threatens not just revenue but the entire incentive structure for creativity.
The Stakes Couldn't Be Higher
UK Music CEO Tom Kiehl described the bill's passage as a "pyrrhic victory at best." The music industry has made its position clear: this isn't just about revenue, but about agency, authorship, and fairness in a rapidly changing technological landscape. The creative sector generates £124 billion annually for the UK economy. The government's gamble is that AI growth will more than compensate for any damage to traditional creative industries.
Whether the UK can remain both a world leader in culture and a hub for trustworthy AI depends on closing the transparency gap fast. Without action, Britain risks trading a short-term AI lead for the long-term erosion of its most iconic export: creativity. The Data Bill aimed to modernize infrastructure but instead ignited one of the most urgent cultural fights of the AI age.
By rejecting transparency, despite wide support across sectors, the government has given generative AI firms a powerful advantage: access without accountability. The next legislative chapter will define not just the future of British music, but the country's reputation as a place that values creativity in the age of artificial intelligence.
The question remains: is the UK willing to sacrifice its own cultural industries for a marginal advantage in the global AI race?