
NYT reaches AI licensing deal with Amazon
The New York Times has inked a multiyear AI licensing deal with Amazon to bring its journalism to a number of Amazon-owned products and experiences, the company said Thursday.
Why it matters: It's the Times' first-ever deal with an AI company for its content.
The newspaper company is currently suing OpenAI and Microsoft for copyright infringement.
Zoom in: The licensing deal will give Amazon access to editorial content from the New York Times, its cooking app, and its sports site, The Athletic, for AI-related uses, the company said.
The content will be used to fuel real-time answers to user queries via summaries and short excerpts of Times content that appear across Amazon products and services, such as the Alexa voice assistant.
The deal also gives Amazon access to the Times' content for training its large language models.
Between the lines: When asked why the deal doesn't include content from the Times' consumer recommendation site, Wirecutter, a spokesperson told Axios that "Amazon and Wirecutter have a longstanding relationship."
The big picture: More news companies are finding ways to strike deals with some AI firms while taking legal action against others.

Related Articles


Forbes
Is Flawed AI Distorting Executive Judgment? — What Leaders Must Do
As AI embeds deeper into leadership workflows, a subtle form of decision drift is taking hold. Not because the tools are flawed but because we stop questioning them. Their polish is seductive. Their speed, persuasive. But when language replaces thought, clarity no longer guarantees correctness.

In May 2025, the Chicago Sun-Times published an AI-generated summer reading list. The summaries were articulate. The titles sounded plausible. But only five of the fifteen books were real. The rest? Entirely made up: fictional authors, fabricated plots, polished prose built on nothing. It sounded smart. It wasn't.

That's the risk. Now imagine an executive team building its strategy on the same kind of output. It's not fiction anymore. It's a leadership risk. And it's happening already. Quietly. Imperceptibly. In organizations where clarity once meant confidence and strategy was something you trusted. Not just in made-up book titles but in the growing gap between what sounds clear and what's actually correct.

Large language models aren't fact checkers. They're pattern matchers. They generate language based on probability, not precision. What sounds coherent may not be correct. The result is a stream of outputs that look strategic but rest on shaky ground.

This isn't a call to abandon AI. But it is a call to re-anchor how we use it. To ensure leaders stay accountable. To ensure AI stays a tool, not a crutch.
I'm not saying AI shouldn't inform decisions. But it must be paired with human intuition, sense-making and real dialogue. The more confident the language, the more likely it is to go unquestioned.

Model collapse is no longer theoretical. It's already happening. It begins when models are trained on outputs from other models or, worse, on their own recycled content. Over time, distortions multiply. Edge cases vanish. Rare insights decay. Feedback loops breed repetition. Sameness. False certainty.

As The Register warned, general-purpose AI may already be declining in quality, not in tone but in substance. What remains looks fluent. But it says less.

That's just the mechanical part. The deeper concern is how this affects leaders. When models feed on synthetic data and leaders feed on those outputs, what you get isn't insight. It's reflection. Strategy becomes a mirror, not a map.

And we're not just talking bias or hallucinations. As copyright restrictions tighten and human-created content slows, the pool of original data shrinks. What's left is synthetic material recycled over and over. More polish. Less spark. According to researchers at Epoch, high-quality training data could be exhausted between 2026 and 2032. When that happens, models won't be learning from the best of what we know. They'll be learning from echoes.

Developers are trying to slow this collapse. Many already are, by protecting non-AI data sources, refining synthetic inputs and strengthening governance. But the impending collapse signals something deeper. A reminder that the future of intelligence must remain blended: human and machine, not machine alone. Intuitive, grounded and real.

Psychologists like Kahneman and Tversky warned us long ago about the framing trap: the way a question is asked shapes the answer.
A 20 percent chance of failure feels different than an 80 percent chance of success, even if it's the same data. AI makes this trap faster and more dangerous. Because now, the frame itself is machine generated. A biased prompt. A skewed training set. A hallucinated answer. And suddenly, a strategy is shaped by a version of reality that never existed.

Ask AI to model a workforce reduction plan. If the prompt centers on financials, the reply may omit morale, long-term hiring costs or reputational damage. The numbers work. The human cost disappears.

AI doesn't interrupt. It doesn't question. It reflects. If a leader seeks validation, AI will offer it. The tone will align. The logic will sound smooth. But real insight rarely feels that easy. That's the risk: not that AI is wrong, but that it's too easily accepted as right.

When leaders stop questioning and teams stop challenging, AI becomes a mirror. It reinforces assumptions. It amplifies bias. It removes friction. That's how decision drift begins. Dialogue becomes output. Judgment becomes approval. Teams fall quiet. Cultures that once celebrated debate grow obedient. And something more vital begins to erode: intuition. The human instinct for context. The sense of timing. The inner voice that says something's off. It all gets buried beneath synthetic certainty.

AI-generated content is already shaping board decks, culture statements and draft policies. In fast-paced settings, it's tempting to treat that output as good enough. But when persuasive language gets mistaken for sound judgment, it doesn't stay in draft mode. It becomes action. Garbage in. Polished out. Then passed as policy. To stop flawed decisions from quietly passing through AI-assisted workflows, leaders have to interrogate the output before it hardens into action.

This isn't about intent. It's about erosion. Quiet erosion in systems that reward speed, efficiency and ease over thoughtfulness. And then there's the flattery trap.
Ask AI to summarize a plan or validate a strategy, and it often echoes the assumptions behind the prompt. The result? A flawed idea wrapped in confidence. No tension. No resistance. Just affirmation. That's how good decisions fail: quietly, smoothly and without a single raised hand in the room.

Leadership isn't about having all the answers. It's about staying close to what's real and creating space for others to do the same. The deeper risk of AI isn't just in false outputs. It's in the cultural drift that happens when human judgment fades. Questions stop. Dialogue thins. Dissent vanishes.

Leaders must protect what AI can't replicate: the ability to sense what's missing. To hear what's not said. To pause before acting. To stay with complexity. AI can generate content. But it can't generate wisdom.

The solution isn't less AI. It's better leadership. Leaders who use AI not as final word but as provocateur. As friction. As spark. In fact, human-generated content will only grow in value. Craft will matter more than code. What we'll need most is original thought, deep conversation and meaning making, not regurgitated text that sounds sharp but says nothing new. Because when it comes to decisions that shape people, culture and strategy, only human judgment connects the dots that data can't see.

In the end, strategy isn't what you write. It's what you see. And to see clearly in the age of AI, you'll need more than a prompt. You'll need presence. You'll need discernment. Neither can be AI trained. Neither can be outsourced.


Business Insider
Winners and Losers: Energy Stocks Soared and Healthcare Crashed in May
May was a month to remember for the U.S. stock market, as the benchmark S&P 500 index posted a gain of 6% and had its best May since 1990. But, as always, there were winners and losers among equities.

The big winners among U.S. stocks during May were energy and technology stocks that are helping to power the artificial intelligence (AI) revolution. Specifically, NRG Energy (NRG) saw its share price rise 42% in the month, and Constellation Energy (CEG) was close behind with a 37% gain. Both companies power AI data centers through cleaner energy sources such as natural gas. Other big winners in May were previously downtrodden technology stocks that are also associated with AI. These include data storage firm Seagate Technology (STX), whose share price increased 37% and outpaced AI chipmaker Nvidia (NVDA). Super Micro Computer (SMCI), which makes AI servers that run Nvidia microchips, also had a big month, with its stock running 26% higher.

Healthcare Loses Out

On the flipside, healthcare was the worst-performing sector of the market in May. The declines were led by insurer UnitedHealth Group (UNH), whose share price fell 27% amid worries after the company slashed its full-year guidance. Also dragging healthcare lower was pharmaceutical giant Eli Lilly (LLY), whose stock dropped 18% after the Trump administration said it wants prescription drug prices lower. Other healthcare stocks that took a drubbing in May include retail pharmacy chain CVS Health (CVS) and healthcare insurer Humana (HUM). The lone bright spot among healthcare stocks was Insulet (PODD), whose share price vaulted 29% higher on strong financial results. The stock has been on an upswing since the U.S. Food and Drug Administration (FDA) approved its insulin system for Type 2 diabetes last summer.

Is LLY Stock a Buy?

The stock of Eli Lilly has a consensus Strong Buy recommendation among 18 Wall Street analysts. That rating is based on 16 Buy, one Hold, and one Sell recommendations issued in the last 12 months. The average LLY price target of $1,003.14 implies 34.82% upside from current levels.
Yahoo
Judge wrestles with far-reaching remedy proposals in US antitrust case against Google
WASHINGTON (AP) — The fate and fortunes of one of the world's most powerful tech companies now sit in the hands of a U.S. judge wrestling with whether to impose far-reaching changes upon Google in the wake of its dominant search engine being declared an illegal monopoly.

U.S. District Judge Amit Mehta heard closing arguments Friday from Justice Department lawyers who argued that a radical shake-up is needed to promote a free and fair market. Their proposed remedies include a ban on Google paying to lock its search engine in as the default on smart devices and an order requiring the company to sell its Chrome browser.

Google's legal team argued that only minor concessions are needed and urged Mehta not to unduly punish the company with a harsh ruling that could squelch future innovations. Google also argued that upheaval triggered by advances in artificial intelligence is already reshaping the search landscape, as conversational search options roll out from AI startups hoping to use the Department of Justice's four-and-a-half-year-old case to gain the upper hand in the next technological frontier.

It was an argument that Mehta appeared to give serious consideration as he marveled at the speed at which the AI industry was growing. He also indicated he was still undecided on how much AI's potential to shake up the search market should be incorporated in his forthcoming ruling. 'This is what I've been struggling with,' Mehta said.

Mehta spoke frequently at Friday's hearing, often asking probing and pointed questions of lawyers for both sides, while hinting that he was seeking a middle ground between the two camps' proposed remedies. 'We're not looking to kneecap Google,' the judge said, adding that the goal was to 'kickstart' competitors' ability to challenge the search giant's dominance.

Mehta will spend much of the summer mulling a decision that he plans to issue before Labor Day. Google has already vowed to appeal the ruling that branded its search engine as a monopoly, a step it can't take until the judge orders a remedy. Google's attorney John Schmidtlein asked Mehta to put a 60-day delay on implementing any proposed changes, which Justice prosecutor David Dahlquist immediately objected to. 'We believe the market's waited long enough,' Dahlquist said.

While both sides of this showdown agree that AI is an inflection point for the industry's future, they have disparate views on how the shift will affect Google. The Justice Department contends that AI technology by itself won't rein in Google's power, arguing additional legal restraints must be slapped on a search engine that's the main reason its parent company, Alphabet Inc., is valued at $2 trillion. Google has already been deploying AI to transform its search engine into an answer engine, an effort that has so far helped maintain its perch as the internet's main gateway despite inroads being made by alternatives from the likes of OpenAI and Perplexity.

The Justice Department contends a divestiture of the Chrome browser that Google CEO Sundar Pichai helped build nearly 20 years ago would be among the most effective countermeasures against Google continuing to amass massive volumes of browser traffic and personal data that could be leveraged to retain its dominance in the AI era. Executives from both OpenAI and Perplexity testified last month that they would be eager bidders for the Chrome browser if Mehta orders its sale.

The debate over Google's fate has also pulled in opinions from Apple, mobile app developers, legal scholars and startups. Apple, which collects more than $20 billion annually to make Google the default search engine on the iPhone and its other devices, filed briefs arguing against the Justice Department's proposed 10-year ban on such lucrative lock-in agreements. Apple told the judge that prohibiting the contracts would deprive the company of money that it funnels into its own research, and that the ban might make Google even more powerful because the company would be able to hold onto its money while consumers would end up choosing its search engine anyway. The Cupertino, California, company also told the judge a ban wouldn't compel it to build its own search engine to compete against Google.

In other filings, a group of legal scholars said the Justice Department's proposed divestiture of Chrome would be an improper penalty that would inject unwarranted government interference into a company's business. Meanwhile, former Federal Trade Commission officials James Cooper and Andrew Stivers warned that another proposal, which would require Google to share its data with rival search engines, 'does not account for the expectations users have developed over time regarding the privacy, security, and stewardship' of their personal information.

Mehta said Friday that compared to some of the Justice Department's other proposals, there was 'less speculation' about what might happen in the broader market if Google were forced to divest of Chrome. Schmidtlein said that was untrue, and that such a ruling would be a wild overreach. 'I think that would be inequitable in the extreme,' he said. Dahlquist mocked some of the arguments against divesting Chrome. 'Google thinks it's the only one who can invent things,' he said.