Google may have to make changes to UK search engine, says watchdog


Google may have to make changes to its search engine in the UK and hand more power back to publishers, the competition regulator has warned.
The Competition and Markets Authority (CMA) has said it is looking at whether it needs to loosen Google's control of its search engine and allow publishers more influence over how their content is used.
Potential changes could see the regulator force Google to give internet users the option to use an alternative search engine.
The tech giant is the first company being targeted by the regulator under a new set of digital market laws.
CMA takes first steps to improve competition in search services in the UK.
We've proposed to designate Google with strategic market status under the new Digital Markets Competition Regime. https://t.co/hla7I5b56K
— Competition & Markets Authority (@CMAgovUK) June 24, 2025
Google accounts for more than 90% of searches in the UK, while it is also used by more than 200,000 UK businesses to reach customers.
Google said it would work 'constructively' with the CMA but highlighted that its plans presented 'challenges' to the business.
The CMA, which launched its investigation into Google in January, said it is minded to give the tech firm 'strategic market status', which would require it to abide by a number of rules over its conduct.
It could be forced to introduce new 'fair ranking' measures so users can compare its search results.
Measures could also include Google providing 'choice screens' for users so they can use alternative search services.
The regulator said publishers could also receive more control over how their content is used, including how or whether it is presented in AI-generated responses.
A final decision is set to be made by October following a consultation process.
Oliver Bethell, senior director of competition at Google, said: 'The CMA has today reiterated that 'strategic market status' does not imply that anti-competitive behaviour has taken place — yet this announcement presents clear challenges to critical areas of our business in the UK.
'We're concerned that the scope of the CMA's considerations remains broad and unfocused, with a range of interventions being considered before any evidence has been provided.
'The UK has historically benefited from early access to our latest innovations, but punitive regulations could change that.
'Proportionate, evidence-based regulation will be essential to preventing the CMA's roadmap from becoming a roadblock to growth in the UK.'
Sarah Cardell, chief executive of the CMA, said: 'Google search has delivered tremendous benefits – but our investigation so far suggests there are ways to make these markets more open, competitive and innovative.
'Today marks an important milestone in our implementation of the new Digital Markets Competition Regime in the UK.
'Alongside our proposed designation of Google's search activities, we have set out a roadmap of possible future action to improve outcomes for people and businesses in the UK.
'These targeted and proportionate actions would give UK businesses and consumers more choice and control over how they interact with Google's search services – as well as unlocking greater opportunities for innovation across the UK tech sector and broader economy.'


Related Articles

Anthropic wins key US ruling on AI training in authors' copyright lawsuit

Reuters

June 24 (Reuters) - A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement. U.S. copyright law says that willful copyright infringement can justify statutory damages of up to $150,000 per work.

An Anthropic spokesperson said the company was pleased that the court recognized its AI training was "transformative" and "consistent with copyright's purpose in enabling creativity and fostering scientific progress."

The writers filed the proposed class action against Anthropic last year, arguing that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), used pirated versions of their books without permission or compensation to teach Claude to respond to human prompts.

The proposed class action is one of several lawsuits brought by authors, news outlets and other copyright owners against companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their AI training.

The doctrine of fair use allows the use of copyrighted works without the copyright owner's permission in some circumstances. Fair use is a key legal defense for the tech companies, and Alsup's decision is the first to address it in the context of generative AI.
AI companies argue their systems make fair use of copyrighted material to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry.

Anthropic told the court that it made fair use of the books and that U.S. copyright law "not only allows, but encourages" its AI training because it promotes human creativity. The company said its system copied the books to "study Plaintiffs' writing, extract uncopyrightable information from it, and use what it learned to create revolutionary technology."

Copyright owners say that AI companies are unlawfully copying their work to generate competing content that threatens their livelihoods.

Alsup agreed with Anthropic on Monday that its training was "exceedingly transformative." "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different," Alsup said.

Alsup also said, however, that Anthropic violated the authors' rights by saving pirated copies of their books as part of a "central library of all the books in the world" that would not necessarily be used for AI training.

Anthropic and other prominent AI companies including OpenAI and Meta Platforms have been accused of downloading pirated digital copies of millions of books to train their systems. Anthropic had told Alsup in a court filing that the source of its books was irrelevant to fair use.

"This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use," Alsup said on Monday.

Small business AI use is lagging, but one firm is channeling Sherlock Holmes and knocking out 'grunt work'

NBC News

Chris Schwegmann is getting creative with how artificial intelligence is being used in law. At Dallas-based boutique law firm Lynn Pinker Hurst & Schwegmann, he sometimes asks AI to channel Supreme Court Chief Justice John Roberts or Sherlock Holmes.

Schwegmann said after uploading opposing counsel's briefs, he'll ask legal technology platform Harvey to assume the role of a legal mind like Roberts to see how the chief justice would think about a particular problem. Other times, he will turn to a fictional character like Holmes, unlocking a different frame of mind.

'Harvey, ChatGPT ... they know who those folks are, and can approach the problem from that mindset,' he said. 'Once we as lawyers get outside those lanes, when we are thinking more creatively involving other branches of science, literature, history, mythology, that sometimes generates some of the most interesting ideas that can then be put, using proper legal judgement, in a framework that works to solve a legal problem.'

It's just one example of how smaller businesses are putting AI to work to punch above their weight, and new data shows there's an opportunity for much more implementation in the future. Only 24% of owners in the recent Small Business and Technology Survey from the National Federation of Independent Business said they are using AI, including ChatGPT, Canva and Copilot, in some capacity. Notably, 98% of those using it said AI has so far not impacted the number of employees at their firms.

At his trial litigation firm of 50 attorneys, Schwegmann said AI is resolving work in days that would sometimes take weeks, and said the technology isn't replacing workers at the firm. It has freed up associate lawyers from doing 'grunt work,' he said, and also means more senior-level partners have the time to mentor younger attorneys because everyone has more time.

The NFIB survey found AI use varied based on the size of the small business.
For firms with employees in the single digits, uptake was at 21%. At firms with fifty or more workers, AI implementation was at nearly half of all respondents. 'The data show clearly that uptake for the smallest businesses lags substantially behind their larger competitors. ... With a little attention from all the relevant stakeholders, a more equal playing field is possible,' the NFIB report said.

For future AI use, 63% of all small employers surveyed said the utilization of the technology in their industry in the next five years will be important to some degree; 12% said it will be extremely important and 15% said it will not be important at all. Some of the most common uses in the survey were for communications, marketing and advertising, predictive analysis and customer service.

'We still have the need for the independent legal judgment of our associate lawyers and our partners — it hasn't replaced them, it just augments their thinking,' Schwegmann said. 'It makes them more creative and frees their time to do what lawyers do best, which is strategic thought and creative problem solving.'

The NFIB data echoes a recent survey from Reimagine Main Street, a project of Public Private Strategies Institute in partnership with PayPal. Reimagine surveyed nearly 1,000 small businesses with annual revenue between $25,000 and $50,000 and also found that a quarter had already started integrating AI into daily workflows.

Schwegmann said at his firm, AI is helping to even the playing field. 'One of the things Harvey lets us do is review, understand and incorporate and respond much faster than we would prior to the use of these kinds of AI tools,' he said. 'No longer does a party have an advantage because they can paper you to death.'

Federal judge rules copyrighted books are fair use for AI training

NBC News

A federal judge has sided with Anthropic in a major copyright ruling, declaring that artificial intelligence developers can train on published books without authors' consent. The decision, filed Monday in the U.S. District Court for the Northern District of California, sets a precedent that training AI systems on copyrighted works constitutes fair use.

Though it doesn't guarantee other courts will follow, Judge William Alsup's ruling marks the first of dozens of ongoing copyright lawsuits to give an answer on fair use in the context of generative AI. It's a question that's been raised by creatives across various industries for years since generative AI tools exploded into the mainstream, allowing users to easily produce art from models trained on copyrighted work — often without the human creator's knowledge or permission.

AI companies have been hit with a slew of copyright lawsuits from media companies, music labels and authors since 2023. Artists have signed multiple open letters urging government officials and AI developers to constrain the unauthorized use of copyrighted works. In recent years, companies have also increasingly inked licensing deals with AI developers to dictate terms of use for their artists' works.

Alsup on Monday ruled on a lawsuit filed last August by three authors — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who claimed that Anthropic ignored copyright protections when it pirated millions of books and digitized purchased books to feed into its large language models, which helped train them to generate human-like text responses.

'The copies used to train specific LLMs were justified as a fair use,' Alsup wrote in the ruling. 'Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.'
His decision stated that Anthropic's use of the books to train its models, including versions of its flagship AI model Claude, was 'exceedingly transformative' enough to fall under fair use. Fair use, as defined by the Copyright Act, takes into account four factors: the purpose of the use, what kind of copyrighted work is used (creative works get stronger protection than factual works), how much of the work was used, and whether the use hurts the market value of the original work.

'We are pleased that the Court recognized that using 'works to train LLMs was transformative — spectacularly so,'' Anthropic said in a statement, quoting the ruling. 'Consistent with copyright's purpose in enabling creativity and fostering scientific progress, 'Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.''

Bartz and Johnson did not immediately respond to requests for comment. Graeber declined to comment.

Alsup noted, however, that all of the authors' works contained 'expressive elements' earning them stronger copyright protection, which is a factor that points against fair use, although not enough to sway the overall ruling. He also added that while making digital copies of purchased books was fair use, downloading pirated copies for free did not constitute fair use.

But aside from the millions of pirated copies, Alsup wrote, copying entire works to train AI models was 'especially reasonable' because the models didn't reproduce those copies for public access, and doing so 'did not and will not displace demand' for the original books. His ruling stated that although AI developers can legally train AI models on copyrighted works without permission, they should obtain those works through legitimate means that don't involve pirating or other forms of theft.
Despite siding with the AI company on fair use, Alsup wrote that Anthropic will still face trial for the pirated copies it used to create its massive central library of books used to train AI. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft,' Alsup wrote, 'but it may affect the extent of statutory damages.'
