Global trade war may produce headwinds for nascent AI sector, IEA says

Reuters | 10-04-2025

PARIS, April 10 (Reuters) - An escalating global tariff war could pose challenges for the emerging data centre sector and slow its growth, Laura Cozzi, the International Energy Agency's (IEA) Director of Technology, told Reuters.
The U.S., China and the European Union together are set to account for 80% of the forecast growth in data centre demand by 2030, which is expected to be dominated by Artificial Intelligence (AI) use, the IEA said in a report on Thursday.
The report's headwind scenario "encompasses many of the things we are seeing - slower economic growth, more tariffs in more countries, so indeed yes (the current tariff environment) is a scenario in which AI would see a slower growth than what we see in our base case," Cozzi said.
Global electricity consumption from data centres is expected to rise to around 945 terawatt hours (TWh) by 2030 in the IEA's base case scenario, but the "headwind scenario" would see that drop to 670 TWh, IEA data showed.
In the United States, data centres are expected to account for nearly half of electricity demand growth between now and 2030, and the country is expected to lead in data centre development globally, according to IEA data.
U.S. electricity utilities have been fielding massive requests for new capacity that would exceed their peak demand or existing generation capacity, raising concerns that tech companies are approaching multiple power utility providers, inflating the demand outlook.
The IEA aims to work with tech companies and industry to make sense of the real queue for data centres, which will ultimately be essential for AI to get the electricity it needs, Cozzi said.
Strain on grids could also lead to project delays, with about 20% of planned data centre projects at risk. Transmission lines and critical grid and generation equipment are in high demand, reflecting this risk, the IEA report said.
Some 50% of data centres under development in the United States are in pre-existing large clusters, potentially raising risks of local bottlenecks, it said.


Related Articles

Facebook groups hit with 'mass suspensions' after Meta technical error

Daily Mirror

2 hours ago


Meta is warning users that it has suspended thousands of Facebook Groups due to a technical error. The company says it's working to fix the issue but has not shared what's causing the widespread suspensions.

Meta is facing global outrage after a spate of mass bans swept through Instagram and Facebook, now hitting Facebook Groups hard, with scores of users barred from one of the social media platform's key features. TechCrunch reports that thousands of groups around the world have been suspended, sparking outrage and coordinated efforts on other platforms like Reddit to exchange information.

Meta's spokesperson, Andy Stone, acknowledged the problem, confirming that the tech giant was on the case. "We're aware of a technical error that impacted some Facebook Groups. We're fixing things now," he stated in an email.

The cause behind the widespread bans remains a mystery, but speculation points towards a glitch in AI moderation systems. Affected users have shared that many of the banned groups were unlikely targets for moderation, focusing on harmless topics such as money-saving tips, parenting advice, pet ownership, gaming, Pokémon, and mechanical keyboards. Admins of the Facebook Groups have been left baffled by ambiguous warnings citing violations for "terrorism-related" content or nudity, which they vehemently deny ever posting, reports the Express.

The scale of the issue is significant, with both small and large groups affected, some with memberships ranging from tens to hundreds of thousands, and even reaching into the millions. Users caught up in the recent Facebook Group ban wave are being advised by their peers to hold off on appealing the suspension, hoping it will be lifted automatically once the bug is fixed. Reddit's Facebook community (r/facebook) is currently awash with posts from frustrated group admins and members upset over the sudden removal of their groups.
Reports are flooding in about entire groups being taken down in one fell swoop, with some users expressing disbelief at the reasons given for the bans, such as a bird photography group with nearly a million followers being flagged for nudity. Some users insist their groups were diligently moderated against spam, citing examples like a family-friendly Pokémon group with close to 200,000 members that was accused of referencing "dangerous organisations," or an interior design group with millions of members receiving the same charge.

A few Facebook Group admins who have invested in Meta's Verified subscription, which promises priority customer support, have managed to receive assistance. However, others have shared that their groups faced suspension or complete deletion without resolution.

The connection between this issue and the broader pattern of bans affecting individual Meta users remains uncertain, but it appears to be part of a larger problem plaguing social networks. Alongside Facebook and Instagram, social networks such as Pinterest and Tumblr have also been hit with complaints about mass suspensions in recent weeks. This has led users to suspect that AI-automated moderation efforts are the culprits. Pinterest at least owned up to its blunder, stating that the mass bans were due to an internal error, but it denied that AI was the problem. Tumblr stated its issues were linked to tests of a new content-filtering system but did not specify whether that system involved AI. When questioned last week about the Instagram bans, Meta declined to comment.

Users are now rallying behind a petition that has already collected more than 12,380 signatures, urging Meta to tackle the issue. Others, including those whose businesses were impacted, are seeking legal recourse. Meta has yet to reveal what's causing the issue with either individual accounts or groups.

Small business AI use is lagging, but one firm is channeling Sherlock Holmes and knocking out 'grunt work'

NBC News

3 hours ago


Chris Schwegmann is getting creative with how artificial intelligence is being used in law. At Dallas-based boutique law firm Lynn Pinker Hurst & Schwegmann, he sometimes asks AI to channel Supreme Court Chief Justice John Roberts or Sherlock Holmes.

Schwegmann said after uploading opposing counsel's briefs, he'll ask legal technology platform Harvey to assume the role of a legal mind like Roberts to see how the chief justice would think about a particular problem. Other times, he will turn to a fictional character like Holmes, unlocking a different frame of mind.

'Harvey, ChatGPT ... they know who those folks are, and can approach the problem from that mindset,' he said. 'Once we as lawyers get outside those lanes, when we are thinking more creatively involving other branches of science, literature, history, mythology, that sometimes generates some of the most interesting ideas that can then be put, using proper legal judgement, in a framework that works to solve a legal problem.'

It's just one example of how smaller businesses are putting AI to work to punch above their weight, and new data shows there's an opportunity for much more implementation in the future. Only 24% of owners in the recent Small Business and Technology Survey from the National Federation of Independent Business said they are using AI, including ChatGPT, Canva and Copilot, in some capacity. Notably, 98% of those using it said AI has so far not impacted the number of employees at their firms.

At his trial litigation firm of 50 attorneys, Schwegmann said AI is resolving work in days that would sometimes take weeks, and said the technology isn't replacing workers at the firm. It has freed up associate lawyers from doing 'grunt work,' he said, and also means more senior-level partners have the time to mentor younger attorneys because everyone has more time.

The NFIB survey found AI use varied based on the size of the small business.
For firms with employees in the single digits, uptake was at 21%. At firms with 50 or more workers, AI implementation was at nearly half of all respondents. 'The data show clearly that uptake for the smallest businesses lags substantially behind their larger competitors. ... With a little attention from all the relevant stakeholders, a more equal playing field is possible,' the NFIB report said.

For future AI use, 63% of all small employers surveyed said the utilization of the technology in their industry in the next five years will be important to some degree; 12% said it will be extremely important and 15% said it will not be important at all. Some of the most common uses in the survey were for communications, marketing and advertising, predictive analysis and customer service.

'We still have the need for the independent legal judgment of our associate lawyers and our partners — it hasn't replaced them, it just augments their thinking,' Schwegmann said. 'It makes them more creative and frees their time to do what lawyers do best, which is strategic thought and creative problem solving.'

The NFIB data echoes a recent survey from Reimagine Main Street, a project of Public Private Strategies Institute in partnership with PayPal. Reimagine surveyed nearly 1,000 small businesses with annual revenue between $25,000 and $50,000 and also found that a quarter had already started integrating AI into daily workflows.

Schwegmann said at his firm, AI is helping to even the playing field. 'One of the things Harvey lets us do is review, understand and incorporate and respond much faster than we would prior to the use of these kinds of AI tools,' he said. 'No longer does a party have an advantage because they can paper you to death.'

Federal judge rules copyrighted books are fair use for AI training

NBC News

3 hours ago


A federal judge has sided with Anthropic in a major copyright ruling, declaring that artificial intelligence developers can train on published books without authors' consent. The decision, filed Monday in the U.S. District Court for the Northern District of California, sets a precedent that training AI systems on copyrighted works constitutes fair use. Though it doesn't guarantee other courts will follow, Judge William Alsup's ruling marks the first of dozens of ongoing copyright lawsuits to give an answer on fair use in the context of generative AI.

It's a question that has been raised by creatives across various industries for years, ever since generative AI tools exploded into the mainstream, allowing users to easily produce art from models trained on copyrighted work, often without the human creator's knowledge or permission. AI companies have been hit with a slew of copyright lawsuits from media companies, music labels and authors since 2023. Artists have signed multiple open letters urging government officials and AI developers to constrain the unauthorized use of copyrighted works. In recent years, companies have also increasingly inked licensing deals with AI developers to dictate terms of use for their artists' works.

Alsup on Monday ruled on a lawsuit filed last August by three authors, Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who claimed that Anthropic ignored copyright protections when it pirated millions of books and digitized purchased books to feed into its large language models, which helped train them to generate human-like text responses.

'The copies used to train specific LLMs were justified as a fair use,' Alsup wrote in the ruling. 'Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.'
His decision stated that Anthropic's use of the books to train its models, including versions of its flagship AI model Claude, was 'exceedingly transformative' enough to fall under fair use. Fair use, as defined by the Copyright Act, takes into account four factors: the purpose of the use, what kind of copyrighted work is used (creative works get stronger protection than factual works), how much of the work was used, and whether the use hurts the market value of the original work.

'We are pleased that the Court recognized that using 'works to train LLMs was transformative — spectacularly so,'' Anthropic said in a statement, quoting the ruling. 'Consistent with copyright's purpose in enabling creativity and fostering scientific progress, 'Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.''

Bartz and Johnson did not immediately respond to requests for comment. Graeber declined to comment.

Alsup noted, however, that all of the authors' works contained 'expressive elements' earning them stronger copyright protection, which is a factor that points against fair use, although not enough to sway the overall ruling. He also added that while making digital copies of purchased books was fair use, downloading pirated copies for free did not constitute fair use.

But aside from the millions of pirated copies, Alsup wrote, copying entire works to train AI models was 'especially reasonable' because the models didn't reproduce those copies for public access, and doing so 'did not and will not displace demand' for the original books. His ruling stated that although AI developers can legally train AI models on copyrighted works without permission, they should obtain those works through legitimate means that don't involve pirating or other forms of theft.
Despite siding with the AI company on fair use, Alsup wrote that Anthropic will still face trial for the pirated copies it used to create its massive central library of books used to train AI. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft,' Alsup wrote, 'but it may affect the extent of statutory damages.'
