
Meta Inks More Clean Energy Deals with Developer Invenergy
Meta Platforms Inc. signed four contracts with closely held renewable energy developer Invenergy for wind and solar power, part of the Facebook owner's push to secure clean sources of electricity for its operations and data centers.
The agreements cover projects in three US states with a total capacity of 791 megawatts, the companies said in a statement. The electricity will feed local power grids, while Meta will receive the clean energy credits tied to the new generation.

Related Articles


Entrepreneur
Meta Wins AI Copyright Case Over Sarah Silverman, Junot Diaz
A district judge sided with tech giant Meta on Wednesday in a major copyright infringement case, Richard Kadrey, et al. v. Meta Platforms Inc. It marks the second time this week that tech companies have scored major legal victories over individuals in AI copyright disputes.

In the case, 13 authors, including Richard Kadrey, Sarah Silverman, Junot Diaz, and Ta-Nehisi Coates, argued that Meta violated copyright law by training its AI models on their copyrighted works without permission. They provided exhibits showing that Meta's Llama AI model could thoroughly summarize their books when prompted, indicating that the AI had ingested their work in training. The case was filed in July 2023. During the discovery phase, it emerged that Meta had used 7.5 million pirated books and 81 million research papers to train its AI model.

On Wednesday, U.S. District Judge Vince Chhabria of San Francisco ruled in a 40-page decision that Meta's use of books to train its AI model was protected under the fair use doctrine in U.S. copyright law. The fair use doctrine permits the use of copyrighted material without the copyright holder's permission in certain cases. What qualifies as fair use depends on factors such as how different the end work is from the original and whether the use harms the existing or future market for the copyrighted work.

Chhabria said that while it "is generally illegal to copy protected works without permission," the plaintiffs failed in this case to show that Meta's use of copyrighted material caused "market harm." They did not show, for instance, that Meta's AI spits out excerpts of books verbatim, creates AI copycat books, or prevents the authors from getting AI licensing deals.
"Meta has defeated the plaintiffs' half-hearted argument that its copying causes or threatens significant market harm," Chhabria stated in the ruling. Furthermore, Meta's copying of books "for a transformative purpose" is protected under the fair use doctrine, the judge ruled.

Earlier this week, a different judge came to the same conclusion in the class action case Bartz v. Anthropic. U.S. District Judge William Alsup of San Francisco stated in a ruling filed on Monday that $61.5 billion AI startup Anthropic was allowed to train its AI model on copyrighted books under the fair use doctrine because the end product was "exceedingly transformative." Anthropic trained its AI on books not to duplicate or replace them, but to "create something different" in the form of AI answers, Alsup wrote. The ruling marked the first time a federal judge has sided with tech companies over creatives in an AI copyright lawsuit; Chhabria's decision marks the second.

Chhabria noted that the ruling does not mean that "Meta's use of copyrighted materials to train its language models is lawful," only that "these plaintiffs made the wrong arguments" and that Meta's arguments won in this case.

"We appreciate today's decision from the Court," a Meta spokesperson said in a statement on Wednesday, per CNBC. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."

Other AI copyright cases are making their way through the courts, including one filed by authors Kai Bird, Jia Tolentino, Daniel Okrent, and several others against Microsoft earlier this week. The lawsuit, filed in New York federal court on Tuesday, alleges that Microsoft violated copyright by training AI on the authors' work.


Forbes
On Disinformation, Is Big Tech Ready For The Digital Services Act?
European Union flag in front of the Berlaymont building, headquarters of the European Commission, in Brussels, Belgium (21 May 2022).

With the EU's Digital Services Act set to come into force next week, big tech firms are failing to fulfil their obligations when it comes to disinformation, a new report has claimed. According to the European Digital Media Observatory (EDMO), there's a "clear gap" between the platforms' commitments under the Code of Practice on Disinformation - set to be integrated into the DSA - and their actual implementation.

The research is based on an evaluation of transparency reports submitted by Meta, Google, Microsoft and TikTok last year, with independent verification by EDMO researchers and qualitative insights from a survey of experts. The companies, said the EDMO, have so far made very limited efforts: "In every field, most [Very Large Online Platforms and Search Engines] [...] Even when formal agreements exist, their implementation often falls short of expectations. As a result, current efforts rarely translate into long-term, systemic support for counter-disinformation strategies."

On the companies' commitments to media literacy and content labeling, Meta's initiatives - such as We Think Digital and in-app prompts - lack transparency about their geographical scope and don't provide substantive data on user engagement or measurable outcomes at the national level, the researchers said. Microsoft, meanwhile, partners with services like NewsGuard, but can give little evidence of reach or effectiveness: there are no user engagement figures, no reported outcomes, and no indication of the actual scale of these efforts. Google has prebunking initiatives and features such as 'More About This Page.' "However, these efforts remain largely unaccountable, as Google provides no concrete data on user reach or effectiveness," the researchers said.
"While the initiatives appear well-designed in theory, the lack of transparency around their actual performance makes it impossible to assess their real-world impact."

TikTok is doing a bit better, the researchers found, with a broader range of national campaigns and fact-checking partnerships. However, it still fails to provide country-specific detail or consistent engagement data.

Governance for sensitive data access, said the EDMO, is a weak point for all the platforms. Meta, Microsoft and Google reference pilot programs, but don't provide any substantive public documentation on governance frameworks or outcomes. And while TikTok is taking part in an EDMO data access pilot, it gives no conclusive evidence of the effectiveness or transparency of these governance efforts.

There are varying degrees of cooperation with fact-checkers, another commitment set to come into force with the DSA. Meta lists quite a number of activities and partnerships but provides no systematic evaluation of their impact, while Microsoft offers only "minimal and vague" references to cooperation. By contrast, Google and TikTok get the thumbs up for their well-integrated processes, though again they don't provide a great deal of information.

"Although platforms like Google and TikTok demonstrate more structured approaches in certain areas, none provide full transparency, independent verification, or robust impact reporting," the researchers said. "Meta's efforts are undermined by poor disclosure and the absence of meaningful impact data. While Microsoft's performance is particularly weak across all commitments, this result should be considered in connection with the specific risk-profile of its services."

Meta, Google, Microsoft and TikTok have been approached for comment.


UPI
Meta gets partial win in AI-teaching copyright case
June 26 (UPI) -- A federal judge let Meta off the hook for using books to train its artificial intelligence model, but the company may still face legal challenges from the authors on other grounds.

United States District Judge Vince Chhabria ruled Wednesday that it wasn't unlawful for Meta to feed copyrighted material to its large language models, known as "Llama," because of the doctrine of "fair use." Chhabria wrote in his ruling that companies training generative AI models on copyrighted works are using the materials in a creative fashion, which in copyright law is considered a "transformative" use; the Copyright Act says the use of copyrighted materials to teach, research, criticize, comment or report news is not infringement.

Conversely, the judge also noted that creating a new product by copying a protected work and then marketing that product could potentially harm the markets for the original materials. "So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works," Chhabria wrote. He then questioned whether companies that feed copyright-protected materials into their AI platforms, without first getting permission from the copyright holders or paying to use the originals, are doing something illegal. "Although the devil is in the details, in most cases the answer will likely be yes," the judge wrote.

Chhabria then referred to a similar case that concluded on Monday, in which U.S. District Judge William Alsup ruled that the Anthropic artificial intelligence company didn't violate any copyright laws when it used millions of copyrighted books to train its own AI. "Judge Alsup focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on," Chhabria wrote.
"No matter how transformative [Llama] training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria wrote.

However, part of the lawsuit filed by the authors also alleges that Meta removed copyright management information, which would violate the Digital Millennium Copyright Act, or DMCA; that claim will be handled separately. A case management conference on how the court will proceed on the DMCA claim is scheduled for July 11.