
Getty Images and Stability AI face off in British copyright trial that will test AI industry
LONDON (AP) — Getty Images is facing off against artificial intelligence company Stability AI in a London courtroom for the first major copyright trial of the generative AI industry.
Opening arguments before a judge at the British High Court are scheduled for Monday. The trial could last for three weeks.
Stability, based in London, owns a widely used AI image-making tool that sparked enthusiasm for the instant creation of AI artwork and photorealistic images upon its release in August 2022. OpenAI introduced its surprise hit chatbot ChatGPT three months later.
Seattle-based Getty has argued that the development of the AI image maker, called Stable Diffusion, involved "brazen infringement" of Getty's photography collection "on a staggering scale."
Tech companies have long argued that "fair use" or "fair dealing" legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Getty was among the first to challenge those practices when it filed copyright infringement lawsuits in the United States and the United Kingdom in early 2023.
"What Stability did was inappropriate," Getty CEO Craig Peters told The Associated Press in 2023. He said creators of intellectual property should be asked for permission before their works are fed into AI systems rather than having to participate in an "opt-out regime."
Stability has argued that the case doesn't belong in the United Kingdom because the training of the AI model technically happened elsewhere, on computers run by U.S. tech giant Amazon.
Similar cases in the U.S. have not yet gone to trial.
Stable Diffusion's roots trace to Germany, where computer scientists at Ludwig Maximilian University of Munich worked with the New York-based tech company Runway to develop the original algorithms. The university researchers credited Stability AI for providing the servers that trained the models, which require large amounts of computing power.

Related Articles


Axios
Amid backlash, Tesla remained resilient in Texas
Even as Tesla deliveries plunged nationally this year amid Elon Musk's very visible (if short-lived) alliance with President Trump, there was at least one state where Tesla registrations were up: Texas.
Why it matters: The registration data, obtained by Axios through public information requests, indicates loyalty to the brand in its home base, including Texas' large urban and suburban counties. The depth of conservatives' enthusiasm for Musk's automobiles now faces a major test amid the absolute meltdown last week between the Tesla CEO and the president.
By the numbers: Texans registered 12,918 new Teslas in the first three months of 2025, a period when Musk, who contributed more than $250 million to a pro-Trump super PAC during the 2024 election campaign, was enmeshed in the Trump administration as the overseer of DOGE, the president's cost-cutting initiative. Over the same period in 2024, Texans registered 10,679 Teslas. That's a 21% increase year over year.
The intrigue: The spike in Texas registrations came as Tesla was flailing elsewhere. Tesla's vehicle deliveries plunged 13% globally in the first quarter of 2025 (336,681 electric vehicles) compared with Q1 2024 (386,810). Tesla vehicles were torched at showrooms and the brand's reputation cratered.
Zoom in: Tesla saw year-over-year improvements in its sales in some of the most populous Texas counties. In Travis County, new Tesla registrations grew from 1,369 in the first quarter of 2024 to 1,424 during the first quarter of 2025. In Harris County, they grew from 1,526 to 1,837 during the same period. Tesla registrations grew from 1,316 to 1,546 in Collin County and from 990 to 1,146 in Dallas County. In Bexar County, registrations grew from 631 to 664.
What they're saying: "It's homegrown pride," is how Matt Holm, president and founder of the Tesla Owners Club of Austin, explains the car company's resilience to Axios. "And regardless of all the drama going on these days, people can differentiate between the product and everything else going on, and it's just a great product."
"Elon has absolutely and irreversibly blown up bridges to some potential customers," says Alexander Edwards, president of California-based research firm Strategic Vision, which has long surveyed the motivations of car buyers. "People who bought Teslas for environmental friendliness, that's pretty much gone," Edwards tells Axios.
Yes, but: The company had been enjoying an increasingly positive reputation among more conservative consumers. Musk was viewed favorably by 80% of Texas Republicans polled by the Texas Politics Project in April — and unfavorably by 83% of Democrats. In what now feels like a political lifetime ago, Trump himself even promoted Teslas by promising to buy one in support of Musk earlier this year.
"In some pockets, like Austin, you have that tech group that loves what Tesla has to offer, can do some mental gymnastics about Musk, and looks at Rivian and says that's not what I want or might be priced out," Edwards says.
Between the lines: "Being in the state of Texas, you're naturally conditioned to think you're better than everyone else in the U.S. And when you buy a Tesla" — a status symbol — "that's what you're saying. It doesn't surprise me that there's an increase in sales" in Texas, Edwards says.
Plus: Tesla's resilience in Texas could have practical reasons as well, Edwards says. Texas homes — as opposed to, say, apartments in cities on the East Coast — are more likely to have a garage to charge a car in, he adds.
What's next: Musk said late last month that Tesla was experiencing a "major rebound in demand" — without providing specifics. But that was before things went absolutely haywire with Trump and Tesla stock took a bath last week.
Yahoo
This AI Company Wants Washington To Keep Its Competitors Off the Market
Dario Amodei, CEO of the artificial intelligence company Anthropic, published a guest essay in The New York Times Thursday arguing against a proposed 10-year moratorium on state AI regulation. Amodei argues that a patchwork of regulations would be better than no regulation whatsoever. Skepticism is warranted whenever the head of an incumbent firm calls for more regulation, and this case is no different. If Amodei gets his way, Anthropic would face less competition—to the detriment of AI innovation, AI security, and the consumer.
Amodei's op-ed came in response to a provision of the so-called One Big Beautiful Bill Act, which would prevent states, cities, and counties from enforcing any regulation that specifically targets AI models, AI systems, or automated decision systems for 10 years. Senate Republicans have amended the clause from a simple requirement to a condition for receiving federal broadband funds, in order to comply with the Byrd Rule, which in Politico's words "blocks anything but budgetary issues from inclusion in reconciliation."
Amodei begins by describing how, in a recent stress test conducted at his company, a chatbot threatened to forward evidence of an experimenter's adultery to his wife unless he withdrew plans to shut the AI down. The CEO also raises more tangible concerns, such as reports that a version of Google's Gemini model is "approaching a point where it could help people carry out cyberattacks."
Matthew Mittelsteadt, a technology fellow at the Cato Institute, tells Reason that the stress test was "very contrived" and that "there are no AI systems where you must prompt it to turn it off." You can just turn it off. He also acknowledges that, while there is "a real cybersecurity danger [of] AI being used to spot and exploit cyber-vulnerabilities, it can also be used to spot and patch" them.
Outside of cyberspace and in, well, actual space, Amodei sounds the alarm that AI could acquire the ability "to produce biological and other weapons." But there's nothing new about that: Knowledge and reasoning, organic or artificial—ultimately wielded by people in either case—can be used to cause problems as well as to solve them. An AI that can model three-dimensional protein structures to create cures for previously untreatable diseases can also create virulent, lethal pathogens.
Amodei recognizes the double-edged nature of AI and says voluntary model evaluation and publication are insufficient to ensure that benefits outweigh costs. Instead of a 10-year moratorium, Amodei calls on the White House and Congress to work together on a transparency standard for AI companies. In the absence of a federal testing standard, Amodei says state laws should pick up the slack without being "overly prescriptive or burdensome."
But that caveat is exactly the kind of wishful thinking Amodei indicts proponents of the moratorium for: Not only would 50 state transparency laws be burdensome, says Mittelsteadt, but they could "actually make models less legible."
Neil Chilson of the Abundance Institute also inveighed against Amodei's call for state-level regulation, arguing that it would be much more onerous than Amodei suggests. "The leading state proposals…include audit requirements, algorithmic assessments, consumer disclosures, and some even have criminal penalties," Chilson tweeted, so "the real debate isn't 'transparency vs. nothing,' but 'transparency-only federal floor vs. intrusive state regimes with audits, liability, and even criminal sanctions.'"
Mittelsteadt thinks national transparency regulation is "absolutely the way to go." But how the U.S. chooses to regulate AI might not have much bearing on Skynet-doomsday scenarios, because, while America leads the way in AI, it's not the only player in the game. "If bad actors abroad create Amodei's theoretical 'kill everyone bot,' no [American] law will matter," says Mittelsteadt. But such a law can "stand in the way of good actors using these tools for defense."
Amodei is not the only CEO of a leading AI company to call for regulation. In 2023, Sam Altman, co-founder and then-CEO of OpenAI, called on lawmakers to consider "intergovernmental oversight mechanisms and standard-setting" of AI. In both cases and in any others that come along, the public should beware of calls for AI regulation that will foreclose market entry, protect incumbent firms' profits from being bid away by competitors, and reduce the incentives to maintain market share the benign way: through innovation and product differentiation.


E&E News
Committee explores nuclear solutions to AI demand
House Science, Space and Technology Committee lawmakers will meet this week to discuss how nuclear energy could help meet a projected surge in demand from artificial intelligence operations. The Energy Subcommittee hearing — to be led by Chair Randy Weber (R-Texas) — continues Republicans' early focus in the 119th Congress on concerns about energy supply and demand. They believe baseload energy sources, such as nuclear and fossil fuels, need to be built at a rapid pace to offset a surge in intermittent, renewable energy generation that could put grid reliability at risk. Indeed, transmission providers are forecasting 8.2 percent growth in electricity load over the next five years, primarily due to AI data center proliferation. That's equivalent to hooking up nearly 50 million homes to the grid by 2029.
But whether nuclear energy can actually meet that demand remains a point of debate among energy and policy experts.