
Getty argues its UK copyright case does not threaten AI
Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI has begun at London's High Court, with Getty rejecting Stability AI's contention that the case poses a threat to the generative AI industry.
Seattle-based Getty, which produces editorial content and creative stock images and video, accuses Stability AI of using its images to "train" its Stable Diffusion system, which can generate images from text inputs.
Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion.
Stability AI - which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP - is fighting the case and denies infringing any of Getty's rights.
Before the trial began on Monday, Stability AI's spokesperson said "the wider dispute is about technological innovation and freedom of ideas".
"Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression," the spokesperson said.
In court filings, Stability AI lawyer Hugo Cuddigan said Getty's lawsuit posed "an overt threat to Stability's whole business and the wider generative AI industry".
Getty's lawyers said that argument was incorrect and their case was about upholding intellectual property rights.
"It is not a battle between creatives and technology, where a win for Getty Images means the end of AI," Getty's lawyer Lindsay Lane told the court.
She added: "The two industries can exist in synergistic harmony because copyright works and database rights are critical to the advancement and success of AI ... the problem is when AI companies such as Stability want to use those works without payment."
Getty's case is one of several lawsuits brought in the United Kingdom, the US and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago.
Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material.
Prominent figures including Elton John have called for greater protections for artists.
Lawyers say Getty's case will have a major effect on the law, as well as potentially informing government policy on copyright protections relating to AI.
"Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," Rebecca Newman, a lawyer at Addleshaw Goddard, who is not involved in the case, said.
Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development".
Related Articles

News.com.au
an hour ago
‘Complete collapse’: Bombshell report into AI accuracy indicates your job is probably safe
The latest form of cutting-edge artificial intelligence technology suffers 'fundamental limitations' that result in a 'complete accuracy collapse', a bombshell report from Apple has revealed.
Researchers from the tech giant have published a paper with their findings, which cast doubt on the true potential of AI as billions of dollars are poured into developing and rolling out new systems.
The team put large reasoning models, an advanced form of AI used in platforms like DeepSeek and Claude, through a series of puzzle challenges ranging from simple to complex. They also tested large language models, which platforms like ChatGPT are built on.
Large language model AI systems fared better than large reasoning models on fairly standard tasks, but both fell flat when confronting more complex challenges, the paper revealed.
Researchers also found that large reasoning models began 'reducing their reasoning effort' as they struggled to perform, which was 'particularly concerning'.
'Upon approaching a critical threshold – which closely corresponds to their accuracy collapse point – models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty,' the paper read.
The advancement of AI, based on current approaches, might have reached its limit for now, the findings suggested.
Niusha Shafiabady, an associate professor of computational intelligence at Australian Catholic University and director of the Women in AI for Social Good lab, said 'expecting AI to be a magic wand' is a mistake.
'I have been talking about the realistic expectations about the AI models since 2024,' Dr Shafiabady said. 'When AI models face countless interactions with the world, it is not possible to investigate and control every single problem that could happen. That is why things could get out of hand or out of control.'
Gary Marcus, a leading voice on AI and six-time author, delivered a savage analysis of the Apple paper on his popular Substack, describing it as 'pretty devastating'.
'Anybody who thinks [large language models] are a direct route to the [artificial general intelligence] that could fundamentally transform society for the good is kidding themselves,' Dr Marcus wrote.
Dr Marcus then took to X to declare that the hype around AI has become 'a giant game of bait and switch'.
'The bait: we are going to make an AI that can solve any problem an expert human could solve. It's gonna transform the whole world,' Dr Marcus wrote. 'The switch: what we have actually made is fun and kind of amazing in its own way but rarely reliable and often makes mistakes – but ordinary people make mistakes too.'
In the wake of the paper's release, Dr Marcus has re-shared passionate defences of AI posted to X by evangelists excusing the accuracy flaws that have been exposed.
'Imagine if calculator designers made a calculator that worked 80 per cent correctly and said 'naah, it's fine, people make mistakes too',' Dr Marcus quipped.
Questions about the quality of large language and large reasoning models aren't new. For example, when OpenAI released its new o3 and o4-mini models in April, it described them as its 'smartest and most capable' yet, trained to 'think for longer before responding'.
'The combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness,' the company's announcement read.
But testing by prestigious American university MIT revealed the o3 model was incorrect 51 per cent of the time, while o4-mini performed even worse with an error rate of 79 per cent.
Truth and accuracy undermined
Apple recently suspended its AI-powered news alert feature on iPhones after users reported significant accuracy errors.
Among the jaw-dropping mistakes were alerts that tennis icon Rafael Nadal had come out as gay, that alleged UnitedHealthcare CEO shooter Luigi Mangione had died by suicide in prison, and that a winner had been crowned at the World Darts Championship hours before competition began.
Research conducted by the BBC found a litany of errors across other AI assistants providing information about news events, including Google's Gemini, OpenAI's ChatGPT and Microsoft's Copilot.
It found 51 per cent of all AI-generated answers to queries about the news had 'significant issues' of some form. When looking at how its own news coverage was being manipulated, the BBC found 19 per cent of answers citing its content were factually incorrect. And in 13 per cent of cases, quotes said to be contained within BBC stories had either been altered or entirely fabricated.
Meanwhile, a newspaper in Chicago was left red-faced recently after it published a summer reading list featuring multiple books that don't exist, thanks to the story copy being produced by AI.
And last year, hundreds of people who lined the streets of Dublin were disappointed when it turned out the Halloween parade advertised on an events website had been invented.
Google was among the first of the tech giants to roll out AI summaries of search results, relying on a large language model – with some hilarious and possibly dangerous results. Among them were suggestions to add glue to pizza, eat a rock a day to maintain health, take a bath with a toaster to cope with stress, drink two litres of urine to help pass kidney stones and chew tobacco to reduce the risk of cancer.
Jobs might be safe – for now
Ongoing issues with accuracy might have some companies thinking twice about going all-in on AI when it comes to substituting their workforces. So too might some recent examples of the pitfalls of replacing people with computers.
Buy now, pay later platform Klarna shed more than 1000 people from its global workforce as part of a dramatic shift to AI resourcing, sparked by its partnership with OpenAI, forged in 2023.
But last month, the Swedish firm conceded its strong reliance on AI customer service chatbots – which saw its employee count almost halved in two years – had created quality issues and led to a slump in customer satisfaction. Realising most customers prefer interacting with a human, Klarna has begun hiring back actual workers.
Software company Anysphere faced a customer backlash in April when its AI-powered support chatbot went rogue, kicking users out of the code-editing platform Cursor and delivering incorrect information. It then seemingly 'created' a new user policy out of thin air to justify the logouts – that the platform couldn't be used across multiple computers. Cursor saw a flood of customer cancellations as a result.
AI adviser and former Google chief decision scientist Cassie Kozyrkov took to LinkedIn to share her thoughts on the saga, dubbing it a 'viral hot mess'.
'It failed to tell users that its customer support 'person' Sam is actually a hallucinating bot,' Ms Kozyrkov wrote. 'It's only going to get worse with AI agents.'
Many companies pushing AI insist the technology is improving swiftly, but a host of experts aren't convinced its hype matches its ability.
Earlier this year, the Association for the Advancement of Artificial Intelligence surveyed two dozen AI specialists and some 400 of the group's members and found a surprising level of pessimism about the potential of the technology. Sixty per cent of those surveyed did not believe problems with factuality and trustworthiness 'would soon be solved', it found.
Issues of accuracy and reliability are important, not just for growing public trust in AI, but for preventing unintended consequences in the future, AAAI president Francesca Rossi wrote in a report about the survey.
'We all need to work together to advance AI in a responsible way, to make sure that technological progress supports the progress of humanity and is aligned to human values,' Ms Rossi said.
Projects stalled or abandoned
Embarrassing and potentially costly issues like these are contributing to a backtrack, with analysis by S&P Global Market Intelligence showing the share of American and European companies abandoning their AI initiatives rising to 42 per cent this year from 17 per cent in 2024.
And a study released last month by consulting firm Roland Berger found a mammoth investment in AI technology wasn't translating to useful outcomes for many businesses. Spending on AI by corporates in Europe hit an estimated US$14 billion (AU$21.4 billion) in 2024, but just 27 per cent of businesses were able to fully integrate the technology into their operations or workflows, the research revealed.
'Asked about the key challenges involved in implementing AI projects, 28 per cent of respondents cited issues with data, 25 per cent referenced the complexity of integrating AI use cases, and 15 per cent mentioned the difficulty of finding enough AI and data experts,' the study found.
Those findings were mirrored in an IBM survey, which found one in four AI projects delivered the returns they promised.
Dr Shafiabady said there are a few reasons for the problems facing AI, like those identified in Apple's research.
'When dealing with highly complex problems, these types of complex AI models can't give an accurate solution. One of the reasons why is the innate nature of algorithms,' Dr Shafiabady said.
'Models are built on mathematical computational iterative algorithms that are coded into computers to be processed. When tasks get very complicated, these algorithms won't necessarily follow the logical reasoning and will lose track of them.
'Sometimes when the problem gets harder, all the computing power and time in the world won't enhance AI model's performance. Sometimes when it hits very difficult tasks, it fails because it has learnt the example rather than the hidden patterns in the data.
'And sometimes the problem gets complicated, and a lot of computation resource and time is wasted over exploring the wrong solutions and there is not enough 'energy' left to reach the right solution.'

Sky News AU
3 hours ago
More money to Moscow than Kyiv: Australia buying billions in fuel using Russian crude despite sanctions against Kremlin
Australia's dependence on imported oil—much of it refined from Russian crude—has handed the Kremlin billions of dollars in tax revenue from exporters, according to new research.
While Australia has proudly pledged $1.5 billion in aid to support Ukraine against Russia's invasion, a new report found the country has funnelled even more money to the Kremlin.
Data from the Europe-based Centre for Research on Energy and Clean Air (CREA) estimated Australia has imported more than AU$3.7 billion worth of oil products derived from Russian crude.
The oil is refined overseas—including at the Jamnagar Refinery in India, a country that has not sanctioned Russian crude—before being legally imported into the Australian market. This would have handed Russian President Vladimir Putin about AU$1.8 billion in tax revenue, according to CREA.
CREA's EU Russia Analyst Vaibhav Raghunandan called it 'a significant failing of Western sanctions', exposing a glaring loophole that lets Russia bypass restrictions.
'Refineries in non-sanctioning countries buying Russian oil are… taking comfort in the knowledge that they will face no action from Western countries,' he said.
Mr Raghunandan said countries like Australia 'look away and continue to import refined products', indirectly funding Russia's invasion of Ukraine.
The Australian Strategic Policy Institute (ASPI) warned this 'policy blind spot' was 'actively undermining our credibility' as a nation.
ASPI's Director of National Security Programs John Coyne called Australia's dependence on Russian-linked fuel 'a serious national security failure and a strategic contradiction'.
'We cannot claim to support Ukraine and uphold a values-based foreign policy while simultaneously fuelling our economy with Russian-linked petroleum,' he told Sky News. 'According to reports, Australia has sent more tax dollars to the Kremlin through these imports than we've provided to Ukraine in aid. That's indefensible.'
Resources Minister Madeleine King did not respond to questions from Sky News about the issue.
Australia has continued to rely on imports to meet its national fuel demands, creating fuel security concerns. According to the latest Australian Petroleum Statistics, there are just 56 days of fuel supply on shore, well below the International Energy Agency's 90-day requirement.
Mr Coyne said Australia's critically low domestic fuel reserves were a vulnerability. 'Our domestic fuel reserves remain critically low… Australia is dangerously exposed to global supply disruptions, conflict, or coercion,' he said. 'We are one maritime chokepoint or geopolitical flare-up away from a fuel crisis.'
Mr Coyne urged the federal government to take immediate action to bolster sovereign energy capability and national resilience.
'The era of cheap energy and blind reliance on market forces to solve national problems is over. Governments can no longer outsource resilience,' he said. 'Right now, Australia is precariously vulnerable to any disruption in the global liquid fuel supply chain and that vulnerability cuts across defence, emergency response, agriculture, and logistics.'
He said that to address the problem, the Albanese government must close the refining loophole in its sanctions regime and improve sovereign refining and storage capacity. He added the government needed to 'embed fuel security in a coordinated, whole-of-government resilience strategy'.
'This is not just about economics. It's about whether Australia can function, respond and defend itself under stress,' Mr Coyne said.


West Australian
4 hours ago
Shire of Shark Bay finishes building 12 social units for seniors, thanks to $4 million State Government grant
A $4 million State Government capital grant has been used to build a dozen social homes for seniors in the coastal Gascoyne town of Denham.
The Shire of Shark Bay recently completed the 12 properties under the Shark Bay Aged Housing Project, which aims to meet the growing need for well-located, appropriate and affordable housing for seniors.
The 12 one-bedroom independent living units were built by Carnarvon-based Northern Aspect Construction next to the recently established community hub.
Housing and Works Minister John Carey said the units would allow current and future tenants to age at home and enjoy quality amenities and support services in Shark Bay.
'Since July 2021, our government has executed more than $200 million in capital grants contracts,' he said. 'The delivery of these homes is a great example of State and local governments working together to get a positive outcome for the community.'
Overall, the State Government has invested $5.1 billion into housing and homelessness measures, adding more than 3000 social homes across WA. More than 1000 are currently under contract or construction.