At Amazon's biggest data centre, everything is supersized for AI

By Karen Weise & Cade Metz
A year ago, a 1,200-acre stretch of farmland outside New Carlisle, Ind., was an empty cornfield. Now, seven Amazon data centres rise up from the rich soil, each larger than a football stadium.
Over the next several years, Amazon plans to build around 30 data centres at the site, packed with hundreds of thousands of specialised computer chips.
With hundreds of thousands of miles of fibre connecting every chip and computer together, the entire complex will form one giant machine intended just for artificial intelligence. The facility will consume 2.2 gigawatts of electricity — enough to power a million homes. Each year, it will use millions of gallons of water to keep the chips from overheating. And it was built with a single customer in mind: the AI startup Anthropic, which aims to create an AI system that matches the human brain.
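
As a rough sanity check on that comparison, the short calculation below converts the reported 2.2 gigawatts into an annual household equivalent. This is a minimal sketch: the figure of roughly 10,500 kWh per year for an average US household is an assumption supplied here, not a number from the article.

```python
# Back-of-envelope check of the "2.2 gigawatts ~ a million homes" comparison.
# Assumption: an average US household uses roughly 10,500 kWh per year
# (the exact figure varies by region and year).

SITE_POWER_GW = 2.2                     # reported draw of the complex
HOURS_PER_YEAR = 24 * 365               # 8,760 hours
KWH_PER_HOME_PER_YEAR = 10_500          # assumed average household consumption

site_kwh_per_year = SITE_POWER_GW * 1_000_000 * HOURS_PER_YEAR  # GW -> kW -> kWh
homes_equivalent = site_kwh_per_year / KWH_PER_HOME_PER_YEAR

print(f"Annual consumption: {site_kwh_per_year / 1e9:.1f} billion kWh")
print(f"Equivalent households: {homes_equivalent / 1e6:.1f} million")
# At full, constant load this works out to about 19.3 billion kWh a year,
# or roughly 1.8 million average households, so "a million homes" is a
# conservative comparison under these assumptions.
```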
The complex — so large that it can be viewed completely only from high in the sky — is the first in a new generation of data centres being built by Amazon, and part of what the company calls Project Rainier, after the mountain that looms near its Seattle headquarters. Project Rainier is Amazon's entry into a race by the technology industry to build data centres so large they would have been considered absurd just a few years ago.
The data centres will dwarf most of today's, which were built before OpenAI's ChatGPT chatbot inspired the AI boom in 2022.
The tech industry's increasingly powerful AI technologies require massive networks of specialised computer chips — and hundreds of billions of dollars to build the data centres that house those chips. The result: behemoths that stretch the limits of the electrical grid and change the way the world thinks about computers.
Amazon, which has invested $8 billion in Anthropic, will rent computing power from the new facility to its startup partner. An Anthropic cofounder, Tom Brown, who oversees the company's work with Amazon on its chips and data centres, said having all that computing power in one spot could allow the startup to train a single AI system.


Related Articles

Judge dismisses authors' copyright lawsuit against Meta over AI training
Mint | 2 hours ago

A federal judge on Wednesday sided with Facebook parent Meta Platforms in dismissing a copyright infringement lawsuit from a group of authors who accused the company of stealing their works to train its artificial intelligence technology. The ruling from U.S. District Judge Vince Chhabria was the second in a week from San Francisco's federal court to dismiss major copyright claims from book authors against the rapidly developing AI industry.

Chhabria found that the 13 authors who sued Meta 'made the wrong arguments' and tossed the case. But the judge also said that the ruling is limited to the authors in the case and does not mean that Meta's use of copyrighted materials is lawful. Lawyers for the plaintiffs — a group of well-known writers that includes comedian Sarah Silverman and authors Jacqueline Woodson and Ta-Nehisi Coates — didn't immediately respond to a request for comment Wednesday. Meta also didn't immediately respond to a request for comment.

'This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful,' Chhabria wrote. 'It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.'

On Monday, from the same courthouse, U.S. District Judge William Alsup ruled that AI company Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books, but that the company must still go to trial for illicitly acquiring those books from pirate websites instead of buying them. The actual process of an AI system distilling from thousands of written works to be able to produce its own passages of text qualified as 'fair use' under U.S. copyright law because it was 'quintessentially transformative,' Alsup wrote.

Chhabria, in his Meta ruling, criticized Alsup's reasoning in the Anthropic case, arguing that 'Alsup focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on.' Chhabria suggested that a case for such harm can be made.

In the Meta case, the authors had argued in court filings that Meta is 'liable for massive copyright infringement' by taking their books from online repositories of pirated works and feeding them into Meta's flagship generative AI system Llama. Lengthy and distinctively written passages of text — such as those found in books — are highly useful for teaching generative AI chatbots the patterns of human language. 'Meta could and should have paid' to buy and license those literary works, the authors' attorneys argued.

Meta countered in court filings that U.S. copyright law 'allows the unauthorized copying of a work to transform it into something new' and that the new, AI-generated expression that comes out of its chatbots is fundamentally different from the books it was trained on. 'After nearly two years of litigation, there still is no evidence that anyone has ever used Llama as a substitute for reading Plaintiffs' books, or that they even could,' Meta's attorneys argued. Meta says Llama won't output the actual works it has copied, even when asked to do so. 'No one can use Llama to read Sarah Silverman's description of her childhood, or Junot Diaz's story of a Dominican boy growing up in New Jersey,' its attorneys wrote.

Accused of pulling those books from online 'shadow libraries,' Meta has also argued that the methods it used have 'no bearing on the nature and purpose of its use' and that the result would have been the same if the company had instead struck a deal with real libraries. Such deals are how Google built its online Google Books repository of more than 20 million books, though it also fought a decade of legal challenges before the U.S. Supreme Court in 2016 let stand lower court rulings that rejected copyright infringement claims.

The authors' case against Meta forced CEO Mark Zuckerberg to be deposed and disclosed internal conversations at the company over the ethics of tapping into pirated databases that have long attracted scrutiny. 'Authorities regularly shut down their domains and even prosecute the perpetrators,' the authors' attorneys argued in a court filing. 'That Meta knew taking copyrighted works from pirated databases could expose the company to enormous risk is beyond dispute: it triggered an escalation to Mark Zuckerberg and other Meta executives for approval. Their gamble should not pay off.' 'Whatever the merits of generative artificial intelligence, or GenAI, stealing copyrighted works off the Internet for one's own benefit has always been unlawful,' they argued.

The named plaintiffs are Jacqueline Woodson, Richard Kadrey, Andrew Sean Greer, Rachel Louise Snyder, David Henry Hwang, Ta-Nehisi Coates, Laura Lippman, Matthew Klam, Junot Diaz, Sarah Silverman, Lysa TerKeurst, Christopher Golden and Christopher Farnsworth. Most of the plaintiffs had asked Chhabria to rule now, rather than wait for a jury trial, on the basic claim of whether Meta infringed on their copyrights. Two of the plaintiffs, Ta-Nehisi Coates and Christopher Golden, did not seek such summary judgment.

Chhabria said in the ruling that while he had 'no choice' but to grant Meta's summary judgment tossing the case, 'in the grand scheme of things, the consequences of this ruling are limited. This is not a class action, so the ruling only affects the rights of these 13 authors -- not the countless others whose works Meta used to train its models.'

Must draw a hard line: US lawmakers want AI systems from these countries to be 'banned' from government agencies
Time of India | 4 hours ago

A bipartisan group of US lawmakers has proposed a bill to ban AI systems from China, Russia, Iran and North Korea from federal executive agencies, a report claims. According to a report by the news agency Reuters, the legislation seeks to create a permanent framework to prohibit the use of these foreign-developed AI models by US government agencies. The recently introduced bill by Representative John Moolenaar (Republican-Michigan) and Representative Raja Krishnamoorthi (Democrat-Illinois) would require the Federal Acquisition Security Council to establish and regularly update a list of AI models developed in these countries.

What US lawmakers said about this bill

In a statement to Reuters, Moolenaar said: 'The US must draw a hard line: hostile AI systems have no business operating inside our government. This legislation creates a permanent firewall to keep adversary AI out of our most sensitive networks, where the cost of compromise is simply too high.'

Under the proposed law, federal agencies would be barred from purchasing or using these AI technologies unless granted a specific exemption, such as for research purposes, from the US Congress or the Office of Management and Budget, Reuters reported. The bill also includes a provision allowing technologies to be removed from the banned list if proof is provided that they are not controlled or influenced by a foreign adversary.

This comes after a previous Reuters report cited a senior US official who claimed that DeepSeek is helping China's military and intelligence operations and that the Chinese AI company has had access to 'large volumes' of Nvidia's chips. In January, DeepSeek came into the spotlight by announcing it had developed an AI model similar in capability to OpenAI's ChatGPT but at a lower cost. Since then, several US companies and government agencies have restricted its use due to data security concerns, and the Trump administration is reportedly considering a ban on its use on federal devices.

The bill was co-sponsored by US Representative Ritchie Torres (Democrat-New York) and Representative Darin LaHood (Republican-Illinois). In the US Senate, it is being led by Senators Rick Scott (Republican-Florida) and Gary Peters (Democrat-Michigan), the Reuters report added.

