The AI copyright standoff continues - with no solution in sight

Yahoo | 02-06-2025
The fierce battle over artificial intelligence (AI) and copyright - which pits the government against some of the biggest names in the creative industry - returns to the House of Lords on Monday with little sign of a solution in sight.
A huge row has kicked off between ministers and peers who back the artists, and shows no sign of abating.
It might be about AI but at its heart are very human issues: jobs and creativity.
It's highly unusual that neither side has backed down by now or shown any sign of compromise; in fact, if anything, support for those opposing the government is growing rather than tailing off.
This is "unchartered territory", one source in the peers' camp told me.
The argument is over how best to balance the demands of two huge industries: the tech and creative sectors.
More specifically, it's about the fairest way to allow AI developers access to creative content in order to make better AI tools - without undermining the livelihoods of the people who make that content in the first place.
What's sparked it is the uninspiringly titled Data (Use and Access) Bill.
This proposed legislation was broadly expected to finish its long journey through parliament this week and sail off into the law books.
Instead, it is currently stuck in limbo, ping-ponging between the House of Lords and the House of Commons.
The bill states that AI developers should have access to all content unless its individual owners choose to opt out.
Nearly 300 members of the House of Lords disagree.
They think AI firms should be forced to disclose which copyrighted material they use to train their tools, with a view to licensing it.
Sir Nick Clegg, former president of global affairs at Meta, is among those broadly supportive of the bill, arguing that asking permission from all copyright holders would "kill the AI industry in this country".
Those against include Baroness Beeban Kidron, a crossbench peer and former film director, best known for making films such as Bridget Jones: The Edge of Reason.
She says ministers would be "knowingly throwing UK designers, artists, authors, musicians, media and nascent AI companies under the bus" if they do not move to protect their output, describing what is happening as "state sanctioned theft" from a UK industry worth £124bn.
She is asking for an amendment to the bill which, if the legislation does not otherwise change, would require Technology Secretary Peter Kyle to report to the House of Commons on the new law's impact on the creative industries three months after it comes into force.
Mr Kyle also appears to have changed his views about UK copyright law.
He said copyright law was once "very certain", but is now "not fit for purpose".
Perhaps to an extent both those things are true.
The Department for Science, Innovation and Technology says it is carrying out a wider consultation on these issues and will not consider changes to the bill unless it is completely satisfied that they work for creators.
If the "ping pong" between the two Houses continues, there's a small chance the entire bill could be shelved; I'm told it's unlikely but not impossible.
If that happens, some other important elements would go with it, simply because they are part of the same bill.
It also includes proposed rules giving bereaved parents the right to access the data of children who have died, changes to allow NHS trusts to share patient data more easily, and even a 3D underground map of the UK's pipes and cables, aimed at improving the efficiency of roadworks (I told you it was a big bill).
There is no easy answer.
Here's how it all started.
Before AI exploded into our lives, AI developers scraped enormous quantities of content from the internet, arguing that it was already in the public domain and therefore freely available.
It was big, mainly US, tech firms doing the scraping, and they did not pay for anything they hoovered up.
Then, they used that data to train the same AI tools now used by millions to write copy, create pictures and videos in seconds.
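In practice, the closest thing the web already has to a machine-readable opt-out is a site's robots.txt file. The bill itself specifies no technical mechanism, so the short Python sketch below is purely illustrative rather than a description of how any AI firm actually operates: it checks that signal before collecting a page ("ExampleAIBot" is a made-up crawler name).

    # Illustrative sketch only: honour a site's robots.txt opt-out signal
    # before collecting a page for a training dataset.
    from urllib.parse import urlparse
    from urllib import robotparser

    def may_collect(page_url: str, user_agent: str = "ExampleAIBot") -> bool:
        """Return True unless the site's robots.txt disallows this crawler."""
        parts = urlparse(page_url)
        robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
        parser = robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetch and parse the site's robots.txt
        return parser.can_fetch(user_agent, page_url)

    if __name__ == "__main__":
        url = "https://example.com/some-article"
        print("OK to collect" if may_collect(url) else "Opted out - skipping")

The point of contention is that an opt-out model along these lines puts the burden on content owners to send the signal, whereas the peers' amendment would instead require AI firms to disclose what they have used.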
These tools can also mimic popular musicians, writers, artists.
For example, a recent viral trend saw people merrily sharing AI images generated in the style of the Japanese animation firm Studio Ghibli.
The founder of that studio, meanwhile, had once described the use of AI in animation as "an insult to life itself". Needless to say, he was not a fan.
There has been a massive backlash from many content creators and owners including household names like Sir Elton John, Sir Paul McCartney and Dua Lipa.
They have argued that taking their work in this way, without consent, credit or payment, amounts to theft, and that artists are now losing work because AI tools can churn out similar content quickly and at no cost.
Sir Elton John didn't hold back in a recent interview with the BBC's Laura Kuenssberg.
He argued that the government was on course to "rob young people of their legacy and their income", and described the current administration as "absolute losers".
Others, though, point out that material made by the likes of Sir Elton is available worldwide.
And if you make it too hard for AI companies to access it in the UK, they'll simply do so elsewhere, taking much-needed investment and job opportunities with them.
Two opposing positions, no obvious compromise.

Related Articles

OpenAI Employee Share Sale Could Value Firm at $500 Billion

Yahoo

OpenAI, backed by Microsoft (MSFT, Financials), is in early talks for an employee share sale that could value the artificial intelligence firm at about $500 billion, a source familiar with the matter said. The deal would let current and former employees sell several billion dollars' worth of shares ahead of a possible initial public offering; it would mark a significant increase from OpenAI's current $300 billion valuation.

The company's flagship product, ChatGPT, has driven rapid growth; revenue doubled in the first seven months of the year to an annualized $12 billion and is expected to reach $20 billion by year-end, according to the source. Weekly active users climbed to about 700 million from 400 million in February.

The proposed sale follows a $40 billion primary funding round earlier this year, led by SoftBank Group, which committed $22.5 billion to the round; the rest of the funding was raised at a $300 billion valuation. Existing investors, including Thrive Capital, are in talks to participate in the share sale.

The transaction would come as competition for AI talent intensifies; tech giants like Meta Platforms (META, Financials) are making multibillion-dollar investments to poach executives and researchers. Private firms such as ByteDance, Databricks and Ramp have also used secondary share sales to refresh valuations and reward long-term employees.

OpenAI is planning a corporate restructuring to move away from its capped-profit model, which could pave the way for a future IPO; the company has said an offering would come only when market conditions are right.

This article first appeared on GuruFocus.

OpenAI's $500 Billion Power Play: Is This the Hottest Deal in Tech Right Now?

Yahoo

OpenAI is weighing a massive secondary stock sale that could push its valuation to around $500 billion, according to people familiar with the matter. The deal, still in early discussions, would allow current and former employees to cash out some of their shares, with existing investors like Thrive Capital reportedly circling. If completed, this would mark a roughly 67% jump from OpenAI's previous $300 billion valuation tied to a $40 billion raise, one of the largest in private tech history. The move isn't just about rewarding staff; it's also a strategic play to retain top talent in the face of fierce poaching from rivals like Meta, which has dangled nine-figure packages to lure researchers away.

The timing is no coincidence. OpenAI recently secured $8.3 billion in a second tranche of that same $40 billion round, which one source said was oversubscribed by about five times. This fresh capital comes as the company doubles down on product and platform expansion. ChatGPT usage is climbing fast, reportedly hitting 700 million weekly active users, while the app now processes over 3 billion messages per day. OpenAI is also preparing its next major release, GPT-5, which is undergoing internal testing. While no official launch date has been announced, the model is expected to reinforce OpenAI's lead amid growing competition. On the hardware front, the company is moving ahead with a nearly $6.5 billion all-stock acquisition of a device startup co-founded by Apple design veteran Jony Ive.

But the company isn't without friction points. Key investors are still in discussions over how OpenAI should be structured going forward, including Microsoft (NASDAQ:MSFT), which has poured in more than $13 billion. At the heart of those talks is Microsoft's stake and long-term access to OpenAI's tech, with the current deal set to run through 2030. As OpenAI races to define its next chapter, spanning AI platforms, consumer hardware and employee retention, investors are left with a familiar question: how much more upside is still on the table?

This article first appeared on GuruFocus.

Meta bans millions of WhatsApp accounts linked to scam operations

The Hill

Meta took down 6.8 million WhatsApp accounts tied to scam operations on Tuesday after victims reported financial fraud schemes. The company said many of the scam sources were based in Southeast Asia at criminal scam centers.

"Based on our investigative insights into the latest enforcement efforts, we proactively detected and took down accounts before scam centers were able to operationalize them," Meta said in a Tuesday release. "These scam centers typically run many scam campaigns at once — from cryptocurrency investments to pyramid schemes. There is always a catch and it should be a red flag for everyone: you have to pay upfront to get promised returns or earnings," the company wrote.

In an effort to ensure users are protected, the company said it would flag when people are added to group messages by someone who isn't in their contact list, and urge individuals to pause before engaging with unfamiliar messages that encourage them to move to other social platforms.

"Scams may start with a text message or on a dating app, then move to social media, private messaging apps and ultimately payment or crypto platforms," Meta said. "In the course of just one scam, they often try to cycle people through many different platforms to ensure that any one service has only a limited view into the entire scam, making it more challenging to detect," the company added.

The Tuesday release highlighted an incident involving Cambodian users urging people to enlist in a rent-a-scooter pyramid scheme, using an initial text message generated by ChatGPT. The message contained a link to a WhatsApp chat, which redirected the target to Telegram, where they were told to like TikTok videos.

"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole. These messages offered recipients high salaries for trivial tasks — such as liking social media posts — and encouraged them to recruit others," OpenAI wrote in its June report focused on disrupting malicious artificial intelligence efforts. "The operation appeared highly centralized and likely originated from Cambodia. Using AI-powered translation tools, we were able to investigate and disrupt the campaign's use of OpenAI services swiftly," the company added.

The Federal Trade Commission has reported a steady increase in social media fraud. The agency said more money was reported lost to fraud originating on social media than via any other method of contact from January 2021 to June 2023, with losses totaling $2.7 billion.
