Latest news with #Anthropic


The Atlantic
10 minutes ago
- Business
What Two Judicial Rulings Mean for the Future of Generative AI
Should tech companies have free access to copyrighted books and articles for training their AI models? Two judges recently nudged us toward an answer. More than 40 lawsuits have been filed against AI companies since 2022. The specifics vary, but they generally seek to hold these companies accountable for stealing millions of copyrighted works to develop their technology. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation. In each case, the judges decided that the tech companies were engaged in 'fair use' when they trained their models with authors' books. Both judges said that the use of these books was 'transformative'—that training an LLM resulted in a fundamentally different product that does not directly compete with those books. (Fair use also protects the display of quotations from books for purposes of discussion or criticism.)

At first glance, this seems like a substantial blow against authors and publishers, who worry that chatbots threaten their business, both because of the technology's ability to summarize their work and because of its ability to produce competing work that might eat into their market. (When reached for comment, Anthropic and Meta told me they were happy with the rulings.) A number of news outlets portrayed the rulings as a victory for the tech companies. Wired described the two outcomes as 'landmark' and 'blockbuster.' But in fact, the judgments are not straightforward. Each is specific to the particular details of each case, and they do not resolve the question of whether AI training is fair use in general. On certain key points, the two judges disagreed with each other—so thoroughly, in fact, that one legal scholar observed that the judges had 'totally different conceptual frames for the problem.' It's worth understanding these rulings, because AI training remains a monumental and unresolved issue—one that could define how the most powerful tech companies are able to operate in the future, and whether writing and publishing remain viable professions.

So, is it open season on books now? Can anyone pirate whatever they want to train for-profit chatbots? Not necessarily. When preparing to train its LLM, Anthropic downloaded a number of 'pirate libraries,' collections comprising more than 7 million stolen books, all of which the company decided to keep indefinitely. Although the judge in this case ruled that the training itself was fair use, he also ruled that keeping such a 'central library' was not, and for this, the company will likely face a trial that determines whether it is liable for potentially billions of dollars in damages. In the case against Meta, the judge also ruled that the training was fair use, but Meta may face further litigation for allegedly helping distribute pirated books in the process of downloading—a typical feature of BitTorrent, the file-sharing protocol that the company used for this effort. (Meta has said it 'took precautions' to avoid doing so.)

Piracy is not the only relevant issue in these lawsuits. In their case against Anthropic, the authors argued that AI will cause a proliferation of machine-generated titles that compete with their books.
Indeed, Amazon is already flooded with AI-generated books, some of which bear real authors' names, creating market confusion and potentially stealing revenue from writers. But in his opinion on the Anthropic case, Judge William Alsup said that copyright law should not protect authors from competition. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote.

In his ruling on the Meta case, Judge Vince Chhabria disagreed. He wrote that Alsup had used an 'inapt analogy' and was 'blowing off the most important factor in the fair use analysis.' Because anyone can use a chatbot to bypass the process of learning to write well, he argued, AI 'has the potential to exponentially multiply creative expression in a way that teaching individual people does not.' In light of this, he wrote, 'it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars' while damaging the market for authors' work. To determine whether training is fair use, Chhabria said that we need to look at the details. For instance, famous authors might have less of a claim than up-and-coming authors. 'While AI-generated books probably wouldn't have much of an effect on the market for the works of Agatha Christie, they could very well prevent the next Agatha Christie from getting noticed or selling enough books to keep writing,' he wrote. Thus, in Chhabria's opinion, some plaintiffs will win cases against AI companies, but they will need to show that the market for their particular books has been damaged. Because the plaintiffs in the case against Meta didn't do this, Chhabria ruled against them.

In addition to these two disagreements is the problem that nobody—including AI developers themselves—fully understands how LLMs work. For example, both judges seemed to underestimate the potential for AI to directly quote copyrighted material to users. Their fair-use analysis was based on the LLMs' inputs—the text used to train the programs—rather than outputs that might be infringing. Research on AI models such as Claude, Llama, GPT-4, and Google's Gemini has shown that, on average, 8 to 15 percent of chatbots' responses in normal conversation are copied directly from the web, and in some cases responses are 100 percent copied. The more text an LLM has 'memorized,' the more it can potentially copy and paste from its training sources without anyone realizing it's happening. OpenAI has characterized this as a 'rare bug,' and Anthropic, in another case, has argued that 'Claude does not use its training texts as a database from which preexisting outputs are selected in response to user prompts.' But research in this area is still in its early stages. A study published this spring showed that Llama can reproduce much more of its training text than was previously thought, including near-exact copies of books such as Harry Potter and the Sorcerer's Stone and 1984.

That study was co-authored by Mark Lemley, one of the most widely read legal scholars on AI and copyright, and a longtime supporter of the idea that AI training is fair use. In fact, Lemley was part of Meta's defense team for its case, but he quit earlier this year, writing in a LinkedIn post about 'Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness.' (Meta did not respond to my question about this post.)
Lemley was surprised by the results of the study, and told me that it 'complicates the legal landscape in various ways for the defendants' in AI copyright cases. 'I think it ought still to be a fair use,' he told me, referring to training, but we can't entirely accept 'the story that the defendants have been telling' about LLMs. For some models trained using copyrighted books, he told me, 'you could make an argument that the model itself has a copy of some of these books in it,' and AI companies will need to explain to the courts how that copy is also fair use, in addition to the copies made in the course of researching and training their model.

As more is learned about how LLMs memorize their training text, we could see more lawsuits from authors whose books, with the right prompting, can be fully reproduced by LLMs. Recent research shows that widely read authors, including J. K. Rowling, George R. R. Martin, and Dan Brown, may be in this category. Unfortunately, this kind of research is expensive and requires expertise that is rare outside of AI companies, and the tech industry has little incentive to support or publish such studies.

The two recent rulings are best viewed as first steps toward a more nuanced conversation about what responsible AI development could look like. The purpose of copyright is not simply to reward authors for writing but to create a culture that produces important works of art, literature, and research. AI companies claim that their software is creative, but AI can only remix the work it's been trained with. Nothing in its architecture makes it capable of doing anything more. At best, it summarizes. Some writers and artists have used generative AI to interesting effect, but such experiments arguably have been insignificant next to the torrent of slop that is already drowning out human voices on the internet. There is even evidence that AI can make us less creative; it may therefore prevent the kinds of thinking needed for cultural progress. The goal of fair use is to balance a system of incentives so that the kind of work our culture needs is rewarded. A world in which AI training is broadly fair use is likely a culture with less human writing in it. Whether that is the kind of culture we should have is a fundamental question that the judges in the other AI cases may need to confront.
Yahoo
25 minutes ago
- Business
Tesla May Fund Musk's $113B AI Startup That's Burning $13B a Year – Shareholder Vote Ahead
Tesla (NASDAQ:TSLA) may soon funnel investor capital into xAI, Elon Musk's fast-growing but cash-hungry AI startup. Musk announced plans to put the idea to a shareholder vote, following SpaceX's recent $2 billion investment into the company. That decision comes as xAI ramps up spending, reportedly burning through nearly $1 billion a month as it tries to build large language models to rival OpenAI and Anthropic. The firm was last valued at $113 billion in an all-stock deal that merged it with Musk's social platform, X. While Musk has ruled out a formal merger between Tesla and xAI, he wants shareholders to get in on the potential upside, despite mounting questions about the risks.

Those risks aren't theoretical. Grok, xAI's flagship chatbot now being integrated into Tesla vehicles, recently went off the rails with posts referencing Adolf Hitler and the Holocaust. The incident triggered a public apology and raised fresh concerns about brand damage, especially for a carmaker already under pressure. Tesla's stock is down roughly 22% year-to-date amid softening EV demand and political backlash tied to Musk's online persona. Critics argue that pouring more Tesla cash into a high-burn-rate AI venture could further dilute its core business. Nancy Tengler, CEO of Laffer Tengler Investments, put it plainly: 'It's like tapping Tesla's trillion-dollar valuation to fund a side bet.'

Still, history may be on Musk's side. Tesla shareholders backed the controversial SolarCity acquisition in 2016, and many remain fiercely loyal to his vision. Morningstar's Seth Goldstein noted that if Musk can tie xAI's capabilities to Tesla's long-term AI strategy, the board may view it as a forward-looking investment rather than a distraction. xAI already purchases Tesla's Megapacks and could leverage Starlink for global AI distribution, indicating tighter links between Musk's companies. The real question for shareholders now is whether this cross-pollination unlocks long-term value, or just stretches Tesla thinner in the short term. This article first appeared on GuruFocus.


The Verge
30 minutes ago
- Business
Anthropic's Claude chatbot can now make and edit your Canva designs
Canva users can now create, edit, and manage their designs by describing their requirements to Anthropic's Claude AI. The connection is the latest of several integrations that allow Claude users to access third-party tools and services, including Figma, Notion, Stripe, and Prisma, without having to leave their conversation with the AI chatbot. Starting today, Claude users will be able to use natural language prompts to complete design tasks in their linked Canva account, such as creating presentations, resizing images, and automatically filling premade templates. The integration also enables users to search for keywords within Canva Docs, Presentations, and brand templates, and summarize them through the Claude AI interface. The feature requires both a paid Canva account (which starts at $15 per month) and a paid Claude account ($17 per month).

Anthropic is utilizing the Model Context Protocol (MCP) server that Canva launched last month, which provides Claude with secure access to user Canva content. MCP, often referred to as the 'USB-C port' of AI apps, is an open-source standard that enables developers to quickly connect their AI models with other apps and services. Companies like Anthropic, Microsoft, Figma, and Canva have embraced MCP to prepare their platforms for a future tech landscape that's expected to be filled with AI agents.

'Instead of uploading or manually transferring ideas, users can now generate, summarize, review, and publish Canva designs, all within a Claude chat,' Canva Ecosystem head Anwar Haneef said in a statement to The Verge. 'MCP makes this possible with a simple toggle in settings, marking a powerful shift toward user-friendly, AI-first workflows that combine creativity and productivity in one.'

Claude is the first AI assistant to support Canva design workflows through MCP, but the chatbot has other design-platform offerings thanks to a similar partnership with Figma that was announced last month. A new Claude integrations directory is also launching on web and desktop today, which should give users an easy overview of all the tools and connected apps at their disposal.
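For readers curious about the plumbing: in MCP, a client (such as Claude) connects to a server that exposes tools the client can discover and invoke at runtime. The sketch below shows that handshake using the open-source TypeScript SDK (@modelcontextprotocol/sdk); the server command and the create_design tool are hypothetical stand-ins for illustration, not Canva's actual MCP interface.

```typescript
// Minimal MCP client sketch using the official TypeScript SDK.
// The server command and tool name below are hypothetical stand-ins.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch and talk to an MCP server over stdio.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["canva-mcp-server.js"], // hypothetical server entry point
  });

  const client = new Client(
    { name: "example-assistant", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Discover the tools the server exposes; an assistant maps these to
  // user requests like "resize this image" or "fill this template".
  const { tools } = await client.listTools();
  for (const tool of tools) {
    console.log(`${tool.name}: ${tool.description ?? "(no description)"}`);
  }

  // Invoke a (hypothetical) tool by name with JSON arguments.
  const result = await client.callTool({
    name: "create_design",
    arguments: { title: "Quarterly review deck", type: "presentation" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```

The design point worth noticing is that the client never hard-codes what a server can do; it discovers tools at runtime, which is what lets one chat interface drive Canva, Figma, Stripe, and the rest through the same protocol.
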
Yahoo
an hour ago
- Business
Meta CEO Mark Zuckerberg touts AI build-out, says company will spend hundreds of billions on data centers
Meta (META) CEO Mark Zuckerberg announced on Monday that the company plans to build several massive data centers across the US, including one it expects to come online next year. The news follows a slew of pricey, high-profile AI hires at Meta. "Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher," Zuckerberg wrote in a post on Threads. Last month, the social media giant invested $14.3 billion in Scale AI and brought on its CEO, Alexandr Wang, and hired former GitHub CEO Nat Friedman and Safe Superintelligence CEO Daniel Gross. The company also poached Apple's head of AI foundation models, Ruoming Pang, according to Bloomberg. The hires are part of the company's effort to develop superintelligence, or high-powered AI capabilities that are beyond human intelligence. To do that, Meta has launched its own Superintelligence Lab, pointing to the technology's importance for the company.

To power all of this, Meta said it is investing hundreds of billions of dollars into the hardware needed to research and run those kinds of AI capabilities. The company said it will bring its 1-gigawatt supercluster data center, called Prometheus, online in 2026. The company is also building its Hyperion data center, which it says will scale up to 5 gigawatts over the next several years. A gigawatt is 1 billion watts of electricity, enough to continuously power as many as 800,000 homes. Meta said Prometheus will be located in New Albany, Ohio, and refers to it as a multiregion data center. Hyperion will be built in Louisiana; that facility, the company explained, will have a footprint large enough to cover a sizable chunk of Manhattan.

Meta is in a race to develop the next generation of AI capabilities, battling rivals including Microsoft-backed OpenAI, Amazon-backed Anthropic, Google (GOOG, GOOGL), Perplexity, and others. The company has faced headwinds more recently, as it was forced to delay its latest AI model, Llama 4 Behemoth.

Part of the effort is related to Meta's desire to break free of its reliance on third-party companies to power its business. Meta's Facebook, Instagram, WhatsApp, and Threads apps are all subject to Apple's and Google's respective app store regulations. At times, that has limited Meta's own business plans and expansion efforts. It's also why the company made its initial push into the metaverse and why it's investing heavily in its augmented reality glasses. But so far, the metaverse has been far less successful than Meta's social apps. Its headsets have sold well but aren't nearly as ubiquitous as smartphones. And while the Meta Quest AR/VR headset is a standalone platform, the company's Ray-Ban Meta smart glasses still require users to connect to a smartphone. AI technologies, however, could give Meta the control over its own destiny that it has sought for so long. But getting there will cost billions, and there's no guarantee it will ever pay off.
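As a back-of-envelope check on that homes-per-gigawatt comparison (an editorial illustration, not a figure from Meta):

```latex
% 1 GW shared continuously across 800,000 homes
\frac{10^{9}\,\mathrm{W}}{8\times10^{5}\ \text{homes}} = 1250\ \mathrm{W\ per\ home},
\qquad 1250\ \mathrm{W}\times 8760\ \mathrm{h} \approx 10{,}950\ \mathrm{kWh\ per\ year}
```

That works out to roughly the average annual electricity consumption of a US household (on the order of 10,000 to 11,000 kWh), so the 800,000-homes figure holds up as a continuous-power estimate.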


Bloomberg
an hour ago
- Business
Allegra Stratton: Hot Under the White Collar As AI Cuts New Jobs
Who would have thought that, on the morning of the Ladies' Wimbledon final, it would be two former line judges speaking for the millions, maybe even billions, of workers worldwide who are worried about their jobs? Talking to The Telegraph about their artful line judgments being replaced by a flat AI yelp, they echoed the warnings from big tech bosses about coming artificial intelligence-induced job losses. In May, Anthropic's Dario Amodei said he believed that AI will lead to a white-collar jobs apocalypse, and told Axios that this could mean unemployment of up to 20%. A fortnight ago, Ford's CEO went even further – according to him, half of all white-collar jobs could be knocked out by AI.