
Apple Expands ChatGPT Deal and Mimics Google in AI Comeback Attempt
The company unveiled a number of AI features at WWDC, though they weren't the focus of the event
Apple Inc. introduced a number of AI-powered features during the company's Worldwide Developers Conference on Monday, even as long-awaited enhancements to its Siri voice assistant remain far off.
The announcements included an expanded relationship with ChatGPT maker OpenAI, which first teamed up with Apple last year and whose technology underpins many of the new capabilities coming to iOS 26, iPadOS 26, macOS 26, and visionOS 26 this fall.

Related Articles


Forbes
Apple Music Gets 3 New Features With iOS 26: WWDC 2025 Recap
Apple's latest iOS 26 update for iPhone brings a slew of changes to system apps. It makes the Photos app usable again and includes new features within Messages, Wallet, Maps, and more. But amid the storm of information, one app that may have flown under the radar during Apple's WWDC announcement is Apple Music. The Cupertino tech giant is supercharging Apple Music with new capabilities; specifically, it is adding three new features to the Music app with iOS 26, and here's why they matter.

The first, real-time lyrics translation, helps you understand the words in your favorite non-English tracks. If you're listening to Spanish country music or a French romantic song, you'll now see the lyrics translated in real time. It's built to make music more accessible and lyrics understandable, regardless of language barriers. The translations appear in a smaller font than the usual lyrics. I installed the iOS 26 Developer Beta 1 to check whether it also translates Hindi songs (Hindi being my primary language) to English. Thankfully, it doesn't: my Bollywood music retains its original lyrics, while I can get Gasolina translated in real time.

Apple Music's Lyrics Pronunciation does exactly what it says: it teaches you how to sing the lyrics correctly. I'm looking forward to seeing how it works so I can sing along to my favorite non-English and non-Hindi songs without butchering the chorus. Under Settings > Apps > Music, Apple now offers a Lyrics menu where you can choose whether lyrics or pronunciation appears larger when both are displayed. I haven't seen pronunciation in Music so far, but I expect it to roll out to more songs gradually.

Automix in Apple Music on iOS 26 can turn your playlists into a DJ-like continuous listening experience. It lives under Settings > Apps > Music > Song Transitions, which replaces the Crossfade toggle. Using time stretching and beat matching, Apple Music now transitions between songs like a DJ would. It can theoretically remove silences between songs and deliver a clean, flowing vibe to your playlist. Automix is stated to transition songs 'at the perfect moment, based on analytics of the key and tempo of the music.' You can still find and toggle Crossfade just below Automix.

All of these new Apple Music features will come to supported iPhones with the iOS 26 update.
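Apple hasn't said how Automix works internally, but the description above (tempo analysis, time stretching, beat matching) maps onto a simple calculation. The Swift sketch below is purely illustrative, with made-up track data and hypothetical helper names: it computes the playback-rate multiplier that would stretch an incoming track to match the outgoing tempo, and picks a crossfade start time that lands on a beat boundary.

```swift
// Illustrative sketch only; not Apple's implementation. All names and numbers are hypothetical.
struct TrackInfo {
    let bpm: Double              // tempo from prior analysis
    let duration: Double         // length in seconds
    let firstBeatOffset: Double  // seconds from start of track to first downbeat
}

// Playback-rate multiplier that time-stretches `incoming` to match `outgoing`'s tempo.
func stretchRatio(outgoing: TrackInfo, incoming: TrackInfo) -> Double {
    outgoing.bpm / incoming.bpm
}

// Latest beat boundary in `outgoing` that still leaves room for a crossfade of `fadeLength` seconds.
func transitionStart(outgoing: TrackInfo, fadeLength: Double) -> Double {
    let beat = 60.0 / outgoing.bpm
    let lastUsable = outgoing.duration - fadeLength
    let beatsBefore = floor((lastUsable - outgoing.firstBeatOffset) / beat)
    return outgoing.firstBeatOffset + beatsBefore * beat
}

let outgoing = TrackInfo(bpm: 96, duration: 192, firstBeatOffset: 0.4)
let incoming = TrackInfo(bpm: 120, duration: 210, firstBeatOffset: 0.2)

print("stretch incoming by", stretchRatio(outgoing: outgoing, incoming: incoming)) // 0.8x slows 120 BPM to 96 BPM
print("start crossfade at", transitionStart(outgoing: outgoing, fadeLength: 8), "seconds")
```

The point of the sketch is only that "transitioning at the perfect moment" reduces to aligning the fade with a beat grid once tempo and downbeat positions are known; the real feature presumably layers key matching and audio analysis on top of this.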


Forbes
Intelligence Illusion: What Apple's AI Study Reveals About Reasoning
The gleaming veneer of artificial intelligence has captivated the world, with large language models producing eloquent responses that often seem indistinguishable from human thought. Yet beneath this polished surface lies a troubling reality that Apple's latest research has brought into sharp focus: eloquence is not intelligence, and imitation is not understanding.

Apple's new study, titled "The Illusion of Thinking," has sent shockwaves through the AI community by demonstrating that even the most sophisticated reasoning models fundamentally lack genuine cognitive abilities. This revelation validates what prominent researchers like Meta's Chief AI Scientist Yann LeCun have been arguing for years: that current AI systems are sophisticated pattern-matching machines rather than thinking entities.

The Apple research team's findings are both methodical and damning. By creating controlled puzzle environments that could precisely manipulate complexity while maintaining logical consistency, they revealed three distinct performance regimes in Large Reasoning Models. In low-complexity tasks, standard models actually outperformed their supposedly superior reasoning counterparts. Medium-complexity problems showed marginal benefits from additional "thinking" processes. But most tellingly, both model types experienced complete collapse when faced with high-complexity tasks.

What makes these findings particularly striking is the counter-intuitive scaling behavior the researchers observed. Rather than improving with increased complexity as genuine intelligence would, these models showed a peculiar pattern: their reasoning effort would increase up to a certain point, then decline dramatically despite having adequate computational resources. This suggests that the models weren't actually reasoning at all; they were following learned patterns that broke down when confronted with novel challenges. The study exposed fundamental limitations in exact computation, revealing that these systems fail to use explicit algorithms and reason inconsistently across similar puzzles. When the veneer of sophisticated language is stripped away, what remains is a sophisticated but ultimately hollow mimicry of thought.

These findings align with warnings that Yann LeCun and other leading AI researchers have been voicing for years. LeCun has consistently argued that current LLMs will be largely obsolete within five years, not because they'll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence. The core issue isn't technical prowess; it's conceptual. These systems don't understand; they pattern-match. They don't reason; they interpolate from training data. They don't think; they generate statistically probable responses based on massive datasets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence.

This disconnect between appearance and reality has profound implications for how we evaluate and deploy AI systems. When we mistake fluency for understanding, we risk making critical decisions based on fundamentally flawed reasoning processes. The danger isn't just technological; it's epistemological.
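To make "controlled puzzle environments" concrete: the idea is a task whose difficulty is governed by a single knob while the underlying rules stay fixed, so failures can't be blamed on ambiguity. The Swift sketch below is a rough illustration of that setup rather than Apple's actual harness. It uses Tower of Hanoi, one of the study's puzzle families, where n disks require exactly 2^n - 1 optimal moves and any proposed solution can be checked mechanically.

```swift
// Rough sketch of a complexity-controlled puzzle harness; not the study's code.
struct Move { let from: Int; let to: Int }   // peg indices 0...2

// Optimal Tower of Hanoi solution: 2^n - 1 moves for n disks.
func hanoiSolution(disks: Int, from: Int = 0, to: Int = 2, via: Int = 1) -> [Move] {
    guard disks > 0 else { return [] }
    return hanoiSolution(disks: disks - 1, from: from, to: via, via: to)
         + [Move(from: from, to: to)]
         + hanoiSolution(disks: disks - 1, from: via, to: to, via: from)
}

// Mechanically verify a proposed move list (e.g. one produced by a model) against the rules.
func isValid(_ moves: [Move], disks: Int) -> Bool {
    var pegs: [[Int]] = [Array((1...disks).reversed()), [], []]  // peg 0 starts with all disks, largest at bottom
    for m in moves {
        guard let disk = pegs[m.from].last,                       // source peg must not be empty
              pegs[m.to].last.map({ $0 > disk }) ?? true          // can't place a larger disk on a smaller one
        else { return false }
        pegs[m.from].removeLast()
        pegs[m.to].append(disk)
    }
    return pegs[2].count == disks                                 // solved only if every disk ends on peg 2
}

// Sweep complexity: the rules never change, only the disk count does.
for n in 3...10 {
    let optimal = hanoiSolution(disks: n)
    print("disks: \(n), optimal moves: \(optimal.count), valid: \(isValid(optimal, disks: n))")
}
```

Because correctness is checkable move by move, a harness like this can measure exactly where accuracy and "reasoning effort" collapse as the disk count grows, which is the kind of scaling curve the study reports.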
Perhaps most unsettling is how closely this AI limitation mirrors a persistent human cognitive bias. Just as we've been deceived by AI's articulate responses, we consistently overvalue human confidence and extroversion, often mistaking verbal facility for intellectual depth.

The overconfidence bias represents one of the most pervasive flaws in human judgment: individuals' subjective confidence in their abilities far exceeds their objective accuracy. This bias becomes particularly pronounced in social and professional settings, where confident, extroverted individuals often command disproportionate attention and credibility. Research consistently shows that we tend to equate confidence with competence, volume with value, and articulateness with intelligence. The extroverted individual who speaks first and most frequently in meetings often shapes group decisions, regardless of the quality of their ideas. The confident presenter who delivers polished but superficial analysis frequently receives more positive evaluations than the thoughtful introvert who offers deeper insights with less theatrical flair.

This psychological tendency creates a dangerous feedback loop. People with low ability often overestimate their competence (the Dunning-Kruger effect), while those with genuine expertise may express appropriate uncertainty about complex issues. The result is a systematic inversion of credibility, where those who know the least speak with the greatest confidence, while those who understand the most communicate with appropriate nuance and qualification.

The parallel between AI's eloquent emptiness and our bias toward confident communication reveals something profound about the nature of intelligence itself. Both phenomena demonstrate how easily we conflate the appearance of understanding with its substance. Both show how sophisticated communication can mask fundamental limitations in reasoning and comprehension.

Consider the implications for organizational decision-making, educational assessment, and social dynamics. If we consistently overvalue confident presentation over careful analysis, whether from AI systems or human colleagues, we systematically degrade the quality of our collective reasoning. We create environments where performance theater takes precedence over genuine problem-solving. The Apple study's revelation that AI reasoning models fail when faced with true complexity mirrors how overconfident individuals often struggle with genuinely challenging problems while maintaining their persuasive veneer. Both represent sophisticated forms of intellectual imposture that can persist precisely because they're so convincing on the surface.

Understanding these limitations, both artificial and human, opens the door to more authentic evaluation of intelligence and reasoning. True intelligence isn't characterized by unwavering confidence or eloquent presentation. Instead, it manifests in several key ways: genuine intelligence embraces uncertainty when dealing with complex problems. It acknowledges limitations rather than concealing them. It demonstrates consistent reasoning across different contexts rather than breaking down when patterns become unfamiliar. Most importantly, it shows genuine understanding through the ability to adapt principles to novel situations.

In human contexts, this means looking beyond charismatic presentation to evaluate the underlying quality of reasoning. It means creating space for thoughtful, measured responses rather than rewarding only quick, confident answers. It means recognizing that the most profound insights often come wrapped in appropriate humility rather than absolute certainty.

For AI systems, it means developing more rigorous evaluation frameworks that test genuine understanding rather than pattern matching. It means acknowledging current limitations rather than anthropomorphizing sophisticated text generation. It means building systems that can genuinely reason rather than simply appearing to do so.

The convergence of Apple's AI findings with psychological research on human biases offers valuable guidance for navigating our increasingly complex world. Whether evaluating AI systems or human colleagues, we must learn to distinguish between performance and competence, between eloquence and understanding. This requires cultivating intellectual humility: the recognition that genuine intelligence often comes with appropriate uncertainty, that the most confident voices aren't necessarily the most credible, and that true understanding can be distinguished from sophisticated mimicry through careful observation and testing.

To distinguish intelligence from imitation in an AI-infused environment, we need to invest in hybrid intelligence, which arises from the complementarity of natural and artificial intelligences, anchored in the strengths and limitations of both.


Digital Trends
Why macOS Tahoe is a big deal for Intel Macs
Apple's WWDC event kicked off on Monday with the usual slew of fresh announcements and updates showcasing the company's software plans for the year ahead. And as with every WWDC keynote, the upcoming shift to new software also signaled diminishing support for older Apple devices.

Case in point: Apple's upcoming macOS 26 (aka macOS Tahoe) will be the final macOS release for Mac computers powered by Intel processors. The tech giant ditched Intel chips in favor of its own custom-designed Apple Silicon chips, starting with the M1 in 2020 before completing the transition in 2023. Matthew Firlik, Apple's senior director of developer relations, dropped the news during the WWDC Platforms State of the Union keynote on Monday.

Tahoe will actually be compatible with only four Intel Macs, which came out in 2019 and 2020, and the updated operating system will not run on Intel versions of the MacBook Air and Mac mini. Specifically, macOS Tahoe will be compatible with:

– MacBook Air with Apple Silicon (2020 and later)
– MacBook Pro with Apple Silicon (2020 and later)
– MacBook Pro (16‑inch, 2019)
– MacBook Pro (13‑inch, 2020; four Thunderbolt 3 ports)
– iMac (2020 and later)
– Mac mini (2020 and later)
– Mac Studio (2022 and later)
– Mac Pro (2019 and later)

Furthermore, there are Mac machines that support macOS Sequoia (the current macOS version) but won't be able to get macOS Tahoe, specifically:

– MacBook Air (Retina, 13-inch, 2020)
– MacBook Pro (13-inch, 2018; four Thunderbolt 3 ports)
– MacBook Pro (15-inch, 2018)
– MacBook Pro (13-inch, 2019; four Thunderbolt 3 ports)
– MacBook Pro (15-inch, 2019)
– MacBook Pro (13-inch, 2019; two Thunderbolt 3 ports)
– MacBook Pro (13-inch, 2020; two Thunderbolt 3 ports)
– iMac (Retina 5K, 27-inch, 2019)
– iMac (Retina 4K, 21.5-inch, 2019)
– iMac Pro (2017)
– Mac mini (2018)

Owners of Intel Macs can still expect to get security updates for their machines for some time to come, but access to new features in macOS 27, set to launch next year, won't be possible, as compatibility will be limited to Apple Silicon devices. (A quick way to check which side of that line a given Mac falls on is sketched below.)

Tahoe, which lands for the latest Mac computers this fall, introduces a major redesign featuring a new Liquid Glass translucent aesthetic; enhanced customization options for folders, app icons, and Control Center; a new Phone app for Mac with iPhone call integration; a major update to Spotlight; expanded Apple Intelligence capabilities including Live Translation and intelligent Shortcuts; and improved Continuity experiences to boost productivity and personalization across Mac and iPhone devices.
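Since macOS 27 is expected to require Apple Silicon, the practical question for many owners (and developers) is simply which processor family a given Mac uses. Below is a minimal Swift sketch of one common way to check, using the hw.optional.arm64 sysctl; treat it as an illustration rather than Apple-recommended practice.

```swift
// Minimal sketch: detect Apple Silicon vs. Intel on macOS.
import Foundation

func isAppleSilicon() -> Bool {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // hw.optional.arm64 is present (and 1) on Apple Silicon hardware, even when the
    // calling process runs under Rosetta 2 translation; the sysctl is absent on Intel Macs.
    let result = sysctlbyname("hw.optional.arm64", &value, &size, nil, 0)
    return result == 0 && value == 1
}

print(isAppleSilicon()
      ? "Apple Silicon: on the upgrade path beyond macOS Tahoe"
      : "Intel: macOS Tahoe (macOS 26) is the last major release")
```

Querying the hardware this way is generally more reliable than inspecting the process's own architecture (for example via utsname), since an Intel-built binary running under Rosetta 2 would otherwise report x86_64 on an Apple Silicon machine.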