The Times and Amazon announce an AI licensing deal

The Times of India · a day ago

The New York Times Co. has agreed to license its editorial content to Amazon for use in the tech giant's artificial intelligence platforms, the company said Thursday.
The multiyear agreement "will bring Times editorial content to a variety of Amazon customer experiences," the news organization said in a statement. Besides news articles, the agreement encompasses material from Cooking, the Times' food and recipe site, and The Athletic, which focuses on sports.
This is the Times' first licensing arrangement with a focus on generative AI technology.
In 2023, the Times sued OpenAI and its partner, Microsoft, for copyright infringement, accusing the tech companies of using millions of articles published by the Times to train automated chatbots without any kind of compensation. OpenAI and Microsoft have rejected those accusations.
Financial terms of the licensing deal with Amazon were not disclosed.
"The deal is consistent with our long-held principle that high-quality journalism is worth paying for," Meredith Kopit Levien, the CEO of the Times, said in a note to staff. "It aligns with our deliberate approach to ensuring that our work is valued appropriately, whether through commercial deals or through the enforcement of our intellectual property rights."
Amazon's use of editorial content from the Times could extend to the Alexa software found on its smart speakers. In some instances, excerpts from Times reporting will include attribution and a link back to the Times' website. Material from the Times will also be used to train Amazon's proprietary AI models, the company said.
Amazon declined to comment beyond what was in the statement issued by the Times.
News organizations have wavered on how to contend with the rapid emergence of AI technology, elements of which have been developed by software programs that ingest the content of millions of online news stories.
Even as the Times pursued litigation against OpenAI, other news outlets -- including Axel Springer, Condé Nast and News Corp -- entered into licensing agreements to receive revenue in exchange for the use of their materials. The Washington Post, which is owned by Amazon's founder, Jeff Bezos, agreed to a deal with OpenAI last month.
Amazon itself has been playing a bit of catch-up in the AI race.
When OpenAI released ChatGPT in late 2022, kicking off the AI boom, Amazon was caught flat-footed, much like Google, Meta and Apple.
Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can learn skills by analyzing huge amounts of digital data. By pinpointing patterns in vast troves of Wikipedia articles, news stories and chat logs, for instance, these systems can learn to generate humanlike text on their own, including poems, term papers and computer programs.
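The paragraph above describes next-token prediction only in broad strokes. As a rough, self-contained illustration of that idea at toy scale (not the systems Amazon, OpenAI or Google actually use), the sketch below trains a single-layer softmax model to predict the next character of a short string and then samples from it; every name, figure and string in it is invented for the example.

```python
# Minimal sketch, assuming nothing from the article: a toy "next-token" predictor
# that learns patterns from raw text, loosely illustrating the idea behind the
# neural networks described above.
import numpy as np

text = "the cat sat on the mat. the cat ate. the mat sat."
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Training pairs: each character is used to predict the character that follows it.
X = np.array([idx[c] for c in text[:-1]])
Y = np.array([idx[c] for c in text[1:]])

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))  # weight matrix: previous char -> next-char scores

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train with plain gradient descent on the cross-entropy loss.
for step in range(500):
    probs = softmax(W[X])                  # predicted distribution over next chars
    grad = probs
    grad[np.arange(len(X)), Y] -= 1        # gradient of cross-entropy w.r.t. scores
    np.add.at(W, X, -0.1 * grad / len(X))  # accumulate the update for each input char

# Generate text by repeatedly sampling the predicted next character.
c = idx["t"]
out = ["t"]
for _ in range(40):
    p = softmax(W[[c]])[0]
    c = rng.choice(V, p=p)
    out.append(chars[c])
print("".join(out))
```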
Like Google, Microsoft and Meta, Amazon had the computing power needed for the task. As the world's largest cloud computing company, it ran a vast network of data centers filled with the specialized computer chips used to train AI systems. But it lacked some of the talent needed to build the most sophisticated systems, and the company had not prioritized the technology to the degree of OpenAI and its partner, Microsoft.
Last June, Amazon inked a deal with a key AI startup called Adept, bringing on many of its employees, including its founder, David Luan. Amazon paid Adept at least $330 million to license its technology, three people with knowledge of the transaction said.
Amazon later inked a similar deal with another important startup called Covariant, bringing on its top talent including its co-founder Pieter Abbeel, a professor of robotics at the University of California, Berkeley, who previously worked for OpenAI.
Luan and Abbeel are now leading an Amazon research lab that aims to build "artificial general intelligence," or AGI, shorthand for a machine that can do anything the human brain can do. This ambitious goal is also the stated aim of OpenAI and other leading research labs, such as Google DeepMind. Data licensed from the Times could help fuel the efforts of Amazon's AGI lab.
In addition to its own AI work, Amazon has established a close partnership with Anthropic, one of OpenAI's chief rivals. Over the past two years, the tech giant has invested $4 billion in the startup.
Amazon's investment in Anthropic is not just a simple equity stake. Much like Microsoft's investment in OpenAI, it gives Amazon access to Anthropic's AI systems -- and commits Amazon computing power to the startup. Amazon also receives an early shot at making Anthropic's AI models available to customers of its cloud computing service.


Related Articles

Anthropic hits $3 billion in annualised revenue on business demand for AI

The Hindu · 15 minutes ago

Artificial intelligence developer Anthropic is making about $3 billion in annualised revenue, according to two sources familiar with the matter, in an early validation of generative AI use in the business world. The milestone, which projects the company's current sales over the course of a year, is a significant jump from December 2024, when the metric was nearly $1 billion, the sources said. The figure crossed $2 billion around the end of March, and at May's end it hit $3 billion, one of the sources said.

While consumers have embraced rival OpenAI's ChatGPT, a number of enterprises have limited their rollouts to experimentation, despite board-level interest in AI. Anthropic's revenue surge, largely from selling AI models as a service to other companies, is a data point showing how business demand is growing, one of the sources said.

A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.

This demand is setting Anthropic apart among software-as-a-service vendors. Its single-quarter revenue increases would make Anthropic the fastest-growing SaaS company that at least one venture capitalist has ever seen. "We've looked at the IPOs of over 200 public software companies, and this growth rate has never happened," said Meritech General Partner Alex Clayton, who is not an Anthropic investor and has no inside knowledge of its sales. He cautioned that these comparisons are not fully precise, since Anthropic also has consumer revenue via subscriptions to its Claude chatbot. Still, by contrast, publicly traded SaaS company Snowflake took six quarters to go from $1 billion to $2 billion in such run-rate revenue, Clayton said.

Anthropic competitor OpenAI has projected it will end 2025 with more than $12 billion in total revenue, up from $3.7 billion last year, three people familiar with the matter said. This total revenue is different from an estimated annualised figure like Anthropic's; Reuters could not determine this metric for OpenAI.

The two rivals appear to be establishing their own swim lanes. While both offer enterprise and consumer products, OpenAI is shaping up to be a consumer-oriented company, and the majority of its revenue comes from subscriptions to its ChatGPT chatbot, OpenAI Chief Financial Officer Sarah Friar told Bloomberg late last year. OpenAI has not reported enterprise-specific revenue but said in May that paying seats for its ChatGPT enterprise product had grown to 3 million, from 2 million in February, and that T-Mobile and Morgan Stanley are among its enterprise customers.

In the consumer race, Anthropic's Claude has seen less adoption than OpenAI's ChatGPT. Claude's traffic, a proxy for consumer interest, was about 2% of ChatGPT's in April, according to web analytics firm Similarweb.

Anthropic, founded in 2021 by a team that departed OpenAI over differences in vision, closed a $3.5 billion fundraise earlier this year that valued the company at $61.4 billion. OpenAI is currently valued at $300 billion.
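For readers unfamiliar with the run-rate metric described above, here is a minimal arithmetic sketch; the monthly figure is hypothetical and not a number reported by Anthropic.

```python
# Illustrative arithmetic only (the monthly figure is hypothetical):
# an annualised "run rate" simply scales a recent period's revenue to a full year.
monthly_revenue = 250_000_000                 # e.g. $250M booked in the latest month
annualised_run_rate = monthly_revenue * 12    # projected over twelve months
print(f"${annualised_run_rate / 1e9:.1f}B run rate")  # -> $3.0B
```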

Google makes case for keeping Chrome browser

Economic Times · 20 minutes ago

Google on Friday urged a US judge to reject the notion of making it spin off its Chrome browser to weaken its dominance in online search. Rival attorneys made their final arguments before US District Court Judge Amit Mehta, who is considering imposing "remedies" after a landmark decision last year that Google maintained an illegal monopoly in search.

US government attorneys have called on Mehta to order Google to divest itself of the Chrome browser, contending that artificial intelligence is poised to ramp up the tech giant's dominance as the go-to window onto the internet. They also want Google barred from agreements with partners such as Apple and Samsung to distribute its search tools, which was the focus of the suit against the Silicon Valley internet giant.

Weeks of testimony ended early in May, with Friday devoted to rival sides parsing points of law and making their arguments before Mehta in a courtroom in Washington. John Schmidtlein, an attorney for Google, told Mehta that there was no evidence presented showing people would have opted for a different search engine without the exclusivity deals in place. Schmidtlein noted that Verizon installed Chrome on smartphones even though the US telecom titan owned the Yahoo! search engine and was not bound by a contract with Google. Of the 100 or so witnesses heard at trial, not one said "if I had more flexibility, I would have installed Bing," the search engine from Microsoft, the Google attorney told the judge.

'More flexibility'

Department of Justice (DoJ) attorney David Dahlquist countered that Apple, which was paid billions of dollars to make Google's search engine the default on iPhones, "repeatedly asked for more flexibility" but was denied by Google.

Google contends that the United States has gone way beyond the scope of the suit by recommending a spinoff of Chrome and holding open the option to force a sale of its Android mobile operating system. "Forcing the sale of Chrome or banning default agreements wouldn't foster competition," said Cato Institute senior fellow in technology policy Jennifer Huddleston. "It would hobble innovation, hurt smaller players, and leave users with worse products."

Google attorney Schmidtlein noted that more than 80 percent of Chrome users are outside the United States, meaning divestiture would have global ramifications. "Any divested Chrome would be a shadow of the current Chrome," he contended. "And once we are in that world, I don't see how you can say anybody is better off."

The potential of Chrome being weakened or spun off comes as rivals such as Microsoft, ChatGPT-maker OpenAI and Perplexity put generative artificial intelligence (AI) to work fetching information from the internet in response to user queries. The online search antitrust suit was filed against Google some five years ago, before ChatGPT made its debut and triggered the AI boom. Google is among the tech companies investing heavily to be a leader in AI, and is weaving the technology into search and other online offerings.

Kneecap Google?
Testimony at trial included Apple vice president of services Eddy Cue revealing that Google's search traffic on Apple devices declined in April for the first time in over two decades. Cue testified that Google was losing ground to AI alternatives like ChatGPT.

Mehta pressed rival attorneys regarding the potential for Google to share data as proposed by the DoJ in its recommended remedies. "We're not looking to kneecap Google," DoJ attorney Adam Severt told the judge. "But we are looking to make sure someone can compete with Google."

Schmidtlein contended that the data Google is being asked to share contains more than just information about people's online searches, saying it would be tantamount to handing over the fruit of investments made over the course of decades. "There are countless algorithms that Google engineers have invented that have nothing to do with click and query data," Schmidtlein said. "Their remedy says we want to be on par with all of your ingenuity, and, respectfully, your honour, that is not proportional to the conduct of this case."

AI's Unchecked Ascent: How Big Tech is outpacing the regulatory rulebook

The Hindu · 30 minutes ago

Artificial intelligence is experiencing a period of meteoric acceleration. Scarcely a week passes without fresh demonstrations of its expanding capabilities, as giants like OpenAI, Meta, Google, Anthropic and Microsoft unveil deeper integrations of their AI models, each flaunting ever more advanced capabilities. These firms' fortunes were built on data, both scraped from the internet and drawn from personal user details. This digital information now serves as the lifeblood for all the AI tools they deploy to the general public as tiered products.

Some of these tech titans have faced scrutiny over their data practices, resulting in fines in certain instances and changes in their behavior in others. They have been questioned by regulators, courts, and the general public in several major economies.

To understand the kind of data these firms collect and the methods they use, consider a 2020 class action lawsuit brought against Google. In Brown et al vs Google LLC, users alleged that the tech giant was tracking them even when they were browsing privately, using Google's 'incognito' mode. The users alleged that the company was tracking their data, including shopping habits and other online searches, despite them choosing to browse privately. The search giant reached a settlement in April, and lawyers for the plaintiffs valued the accord as high as $7.8 billion. While users will have to individually file for damages, the company agreed to delete troves of data from its records following the settlement.

In another case, Google agreed to settle a suit brought against it by Texas Attorney General Ken Paxton over deceptive location tracking. The Silicon Valley company agreed to pay $1.4 billion for illegally tracking the location and biometric details of users without consent.

Google is not alone. Llama AI owner Meta is another data guzzler. The social media giant was accused of using the biometric data of users illegally. The company agreed to pay $1.4 billion and sought to deepen its business in the state of Texas.

The settlement route

Both Google and Meta have denied any wrongdoing. This pattern of settling out of court while denying wrongdoing only emboldens the tech giants. By settling, these companies avoid creating legal precedents that could be used against them or the broader tech industry in future cases. A definitive court ruling against their data practices could open the floodgates for similar lawsuits.

If Google and Meta's legal woes are largely concerned with user data, OpenAI, the standard-bearer of AI's rapid advance, finds itself contesting lawsuits that probe the very foundations of its training methodologies. Multiple class-action suits accuse the company of illicitly scraping vast quantities of personal data from the internet without consent to train its large language models. High-profile authors and media organisations, including The New York Times, have joined this legal fray, alleging copyright infringement and claiming their intellectual property was unlawfully used to build OpenAI's ChatGPT.

The copyright battles aren't limited to the U.S. Indian book publishers and their international counterparts filed a copyright lawsuit against OpenAI earlier this year, while publisher Ziff Davis sued OpenAI for copyright infringement in April, adding to the web of high-stakes copyright cases. These cases starkly illuminate the conflict between the AI industry's perceived hunger for limitless data and established protections for personal information and intellectual property.
Even as litigation mounts, OpenAI, Google and Meta's AI development and deployment continue, seemingly undeterred. Apparently oblivious to these legal and regulatory threats, tech giants appear to operate in a realm where conventional constraints are less binding. They not only continue to enhance their AI models but deploy them with ever-greater velocity, even as legal frameworks struggle to catch up or even define the parameters of a race that is already decisively underway.

The EU gold-standard tested

Perhaps an answer lies across the Atlantic, where Europe's General Data Protection Regulation (GDPR) represents a robust attempt to tether data use to individual rights. Penalties under GDPR can be formidable, and the EU has been moving beyond GDPR violations to broader digital market competition issues. Just this year, the EU fined Meta over the company's user consent policy, which violated the bloc's Digital Markets Act. The EU's scrutiny is not confined to American firms. Complaints have also targeted Chinese tech companies like TikTok and SHEIN, with allegations of unlawful data exports.

While GDPR has undeniably compelled companies to adjust certain practices, the broader AI industry, particularly builders of foundational models, has continued its global expansion with little apparent deceleration. Moreover, the ultimate efficacy of Europe's direct AI regulation remains an open question, with the EU's AI Act not slated for full implementation until August 2025.

This dynamic is mirrored in other significant economies. India, with its Digital Personal Data Protection Act, 2023, is navigating this regulatory maze, formalising a data protection regime. The Act aims for a comprehensive framework, balancing consent requirements with provisions for future flexibility, thus attempting a delicate calibration between control and encouragement. India aims to be both a regulator and an important AI player. China, too, has implemented stringent data privacy rules that make it difficult for foreign firms to transfer 'significant data'. While China is strict about data transfers from its soil, the country has given AI development paramount strategic importance by supporting local firms in harnessing the latest advances in emerging technologies. And as in the U.S., the firms investing most heavily in AI are often those with the largest data troves.

Thus, while courtrooms bustle and regulators issue stern pronouncements, AI giants forge ahead, relentlessly refining models and deploying them at remarkable speeds. Legal challenges, however significant, often resemble the wake behind a rapidly advancing ship rather than a rudder steering its course. It is abundantly clear that privacy laws and regulatory frameworks are struggling to keep pace. The fundamental truth is that Big Tech's AI innovation cycle currently far outstrips the slower, more deliberative cadence of legal and ethical calibration. In this race, user privacy and broader societal guardrails risk becoming afterthoughts: issues to be managed or litigated post hoc, rather than foundational principles guiding AI's unchecked and transformative ascent.
