
Bloomberg Tech: Nvidia, AMD AI Deal
Bloomberg's Caroline Hyde and Ed Ludlow discuss Nvidia and AMD's agreement to pay 15% of their revenues from Chinese artificial intelligence chip sales to the US government. Analysts and investors react to the 'unprecedented' deal. Plus, Intel CEO Lip-Bu Tan is set to meet with President Donald Trump, after the US leader called for his resignation.

Related Articles
Yahoo
10 minutes ago
Perplexity AI offers $34.5bn to acquire Google Chrome
AI-powered search engine Perplexity AI has made an unsolicited offer of $34.5bn to acquire Alphabet's Chrome browser. Perplexity's CEO, Aravind Srinivas, is known for aggressive acquisition strategies, as evidenced by the company's previous offer for TikTok US. Tech industry giants such as OpenAI and Yahoo, alongside Apollo Global Management, are also showing interest in Chrome. Alphabet has yet to respond to the bid, and Chrome is not currently up for sale.

The technology conglomerate plans to appeal a US court ruling that found it held an unlawful monopoly in online search. The Justice Department's case against Google includes a potential Chrome divestiture as a remedy, which could influence the outcome of Perplexity's offer. Despite not disclosing its funding plan, Perplexity has raised approximately $1bn from investors including Nvidia and SoftBank. The company claims that multiple funds are ready to finance the deal in full.

Perplexity's proposal, which notably lacks an equity component, aims to maintain user choice and address future competition concerns. However, analysts are sceptical about Google's willingness to part with Chrome, given its strategic importance to the company's AI initiatives, according to Reuters. The legal landscape also presents challenges, with a federal ruling on the Google search antitrust case expected soon.

Perplexity, with its own AI browser Comet, aims to harness Chrome's more than three billion users to strengthen its position against larger competitors. The company has committed to keeping Chrome's underlying code, Chromium, open source and plans to invest $3bn over two years while maintaining Chrome's default search engine, as per a term sheet seen by Reuters. In July 2025, Perplexity AI obtained new funding, bringing its valuation to $18bn. The company secured $100m in fresh capital as an extension of a previous round.
"Perplexity AI offers $34.5bn to acquire Google Chrome" was originally created and published by Verdict, a GlobalData owned brand. The information on this site has been included in good faith for general informational purposes only. It is not intended to amount to advice on which you should rely, and we give no representation, warranty or guarantee, whether express or implied, as to its accuracy or completeness. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content on our site.


Vox
11 minutes ago
ChatGPT's update brings us one step closer to living in the movie Her
The article's author, a senior technology correspondent at Vox and writer of the User Friendly newsletter, has spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice. When it came out in 2013, the movie Her was set in the 'slight future.' That 'slight future' is, astonishingly, here. After all, AI-powered chatbots actually are a real thing now, and people are falling in love with them. It's remarkable that the 2013 Spike Jonze sci-fi romance about a lonely mustachioed man, played by Joaquin Phoenix, talking to a robot in an earbud proved to be so prescient. Still, much of our AI-dominated future is taking shape.

AI will certainly shape the culture of the next 25 years, but its biggest transformations to our world still remain to be seen. Even in its most rudimentary iterations, the technology has caused us to question our grip on reality. One of the first moments that AI broke through as a cultural force was when a fake image of Pope Francis wearing a white puffy Balenciaga coat fooled the entire internet a couple of years ago (if only because it avoided the common AI mistake of six-fingered hands). On the darker end of the spectrum, nonconsensual deepfake porn created using AI has become nearly impossible to keep offline, prompting new legislation banning the practice. As people turn to AI for companionship, just as Phoenix's character did in Her, some of them are losing touch with reality, becoming delusional, and in one case an AI chatbot has been linked to a teen's suicide. These developments seem to signal that a more seismic shift is on the way.
How will all of that play out? Well, that's the thing about the future: We won't know until it's barrelling down on us. In cultural terms, that could mean we're watching movies that are all at least partially AI-generated, hanging out with AI friends online, and listening to AI-generated music more than human bands.


Time Magazine
12 minutes ago
Using AI Made Doctors Worse at Spotting Cancer Without Assistance
Health practitioners, companies, and others have for years hailed the potential benefits of AI in medicine, from improving medical imaging to outperforming doctors at diagnostic assessments. AI enthusiasts have even predicted that the transformative technology will one day help find a 'cure to cancer.' But a new study has found that doctors who regularly used AI actually became less skilled within months.

The study, published on Wednesday in the Lancet Gastroenterology and Hepatology journal, found that over the course of six months, clinicians became over-reliant on AI recommendations and were themselves 'less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.' It's the latest study to demonstrate potential adverse outcomes for AI users. An earlier study by the Massachusetts Institute of Technology found that ChatGPT eroded critical thinking skills.

How the study was conducted

Researchers across various European institutions conducted an observational study of four endoscopy centers in Poland that participated in the Artificial Intelligence in Colonoscopy for Cancer Prevention (ACCEPT) trial. The study was funded by the European Commission and the Japan Society for the Promotion of Science. As part of the trial, the centers had introduced AI tools for the detection of polyps (growths that can be benign or cancerous) in late 2021.

The study looked at 1,443 non-AI-assisted colonoscopies out of a total of 2,177 colonoscopies conducted between September 2021 and March 2022, performed by 19 experienced endoscopists. Researchers compared the quality of colonoscopies conducted three months before and three months after AI was implemented. Colonoscopies were conducted either with or without AI assistance, at random. Of those conducted without AI assistance, 795 took place before regular AI use was implemented and 648 after the AI tools were introduced.
What the study found

Three months before AI was introduced, the adenoma detection rate (ADR) was around 28%. Three months after AI was introduced, the rate dropped to 22% when clinicians were unassisted by AI. ADR is a commonly used quality indicator for colonoscopies and represents 'the proportion of screening colonoscopies performed by a physician that detect at least one histologically confirmed colorectal adenoma or adenocarcinoma.' Adenomas are precancerous growths, and a higher ADR is associated with a lower risk of colorectal cancer.

The study found that AI did help endoscopists with detection when used, but once the assistance was removed, clinicians were worse at detection. Researchers attributed this to 'the natural human tendency to over-rely' on the recommendations of decision support systems like AI. 'Imagine that you want to travel anywhere, and you're unable to use Google Maps,' Marcin Romańczyk, co-author of the study and an assistant professor at the Medical University of Silesia, told MedPage Today. 'We call it the Google Maps effect. We try to get somewhere, and it's impossible to use a regular map. It works very similarly.'

Implications of the study

Omer Ahmad, a consultant gastroenterologist at University College Hospital London who wrote an editorial alongside the study but was not involved in its research, tells TIME that exposure to AI likely weakened doctors' visual search habits and alerting gaze patterns, which are critical for detecting polyps. 'In essence, dependence on AI detection could dull human pattern recognition,' Ahmad says. He adds that regular use of AI could also 'reduce diagnostic confidence' when AI assistance is withdrawn, or diminish endoscopists' skill at manoeuvring the colonoscope.
In comments to the Science Media Center (SMC), Catherine Menon, principal lecturer at the University of Hertfordshire's Department of Computer Science, said: 'Although de-skilling resulting from AI use has been raised as a theoretical risk in previous studies, this study is the first to present real-world data that might potentially indicate de-skilling arising from the use of AI in diagnostic colonoscopies.' Menon raised concerns that overreliance on AI could leave health practitioners at risk of technological compromise.

Other experts are more cautious about drawing conclusions from a single study. Venet Osmani, a professor of clinical AI and machine learning at Queen Mary University of London, noted to the SMC that the total number of colonoscopies, including both AI-assisted and non-AI-assisted ones, increased over the course of the study. The increased workload, Osmani suggested, could have led to clinician fatigue and poorer detection rates.

Allan Tucker, a professor of artificial intelligence at Brunel University of London, also noted that with AI assistance, clinician performance improved overall. Concern about deskilling due to automation bias, Tucker told the SMC, 'is not unique to AI systems and is a risk with the introduction of any new technology.' 'The ethical question then is whether we trust AI over humans,' said Tucker. 'Often, we expect there to be a human overseeing all AI decision-making but if the human experts are putting less effort into their own decisions as a result of introducing AI systems this could be problematic.'

'This is not simply about monitoring technology,' says Ahmad. 'It's about navigating the complexities of a new human-AI clinical ecosystem.' Establishing safeguards is critical, he adds, suggesting that beyond this study, people may need to focus on 'preserving essential skills in a world where AI becomes ubiquitous.'