
Apple WWDC: AI Competition to Stay Fierce, Analyst Says
Bloomberg Markets: The Close
Daniel Flax, Senior Research Analyst at Neuberger Berman, speaks with Romaine Bostick and Alix Steel about Apple's WWDC. (Source: Bloomberg)
Related Articles


Forbes
9 minutes ago
Apple Music Gets 3 New Features With iOS 26: WWDC 2025 Recap
Apple's latest iOS 26 update for iPhone brings a slew of changes to system apps. It makes the Photos app usable again and adds new features to Messages, Wallet, Maps, and more. Amid that storm of information, one app that could have gone under the radar during Apple's WWDC announcements is Apple Music. The Cupertino tech giant is supercharging Apple Music with three new features in iOS 26, and here's why they matter.

Lyrics Translation helps you understand the words in your favorite non-English tracks. If you're listening to Spanish country music or a French romantic song, you'll now see the lyrics translated in real time. It's built to make music more accessible and lyrics understandable, regardless of language barriers. The translations appear in a smaller font than the usual lyrics. I installed the iOS 26 Developer Beta 1 to check whether it also translates Hindi songs (Hindi being my primary language) into English. Thankfully, it doesn't: my Bollywood music retains its original lyrics, while I can get Gasolina translated in real time.

Lyrics Pronunciation does exactly what it says: it teaches you how to sing the lyrics correctly. I'm looking forward to learning how it works so I can sing along to my favorite non-English and non-Hindi songs without butchering the chorus. Under Settings > Apps > Music, Apple now offers a Lyrics menu where you can choose whether lyrics or pronunciation appears larger when both are displayed. I haven't seen pronunciation in Music so far, but I expect it to roll out to more songs gradually.

Automix can turn your playlists into a DJ-like continuous listening experience. It lives under Settings > Apps > Music > Song Transitions, which replaces the old Crossfade toggle. Using time stretching and beat matching, Apple Music now transitions between songs the way a DJ would. In theory, it can remove the silence between tracks and give your playlist a clean, flowing feel. Automix is stated to transition songs 'at the perfect moment, based on analytics of the key and tempo of the music.' You can still find and toggle Crossfade just below Automix. All of these new Apple Music features will arrive on supported iPhones with the iOS 26 update.
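Apple hasn't published how Automix works, but "time stretching and beat matching" maps onto standard DJ-software math: match the incoming track's tempo to the outgoing one, then overlap the two during a fade window. A minimal sketch of that idea, with invented function names and an equal-power crossfade assumed as the fade curve:

```python
import math

# Hypothetical illustration only; Apple's actual Automix algorithm
# is not public, and these names/numbers are invented.

def stretch_ratio(bpm_out: float, bpm_in: float) -> float:
    """Playback-rate factor for the incoming track so its tempo
    matches the outgoing track's (beat matching via time stretch)."""
    return bpm_out / bpm_in

def crossfade_gains(t: float, duration: float) -> tuple[float, float]:
    """Equal-power crossfade: (outgoing, incoming) gains at time t.
    Keeps total perceived loudness roughly constant, since
    cos(x)^2 + sin(x)^2 = 1."""
    x = max(0.0, min(1.0, t / duration))
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

# A 128 BPM track following a 120 BPM one would be slowed by a
# factor of 120/128 = 0.9375 during the transition.
```

The equal-power curve is the conventional choice for music transitions because a straight linear fade produces an audible dip in loudness at the midpoint.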


Forbes
30 minutes ago
Intelligence Illusion: What Apple's AI Study Reveals About Reasoning
The gleaming veneer of artificial intelligence has captivated the world, with large language models producing eloquent responses that often seem indistinguishable from human thought. Yet beneath this polished surface lies a troubling reality that Apple's latest research has brought into sharp focus: eloquence is not intelligence, and imitation is not understanding.

Apple's new study, titled "The Illusion of Thinking," has sent shockwaves through the AI community by demonstrating that even the most sophisticated reasoning models fundamentally lack genuine cognitive abilities. This revelation validates what prominent researchers like Meta's Chief AI Scientist Yann LeCun have argued for years: that current AI systems are sophisticated pattern-matching machines rather than thinking entities.

The Apple research team's findings are both methodical and damning. By creating controlled puzzle environments that could precisely manipulate complexity while maintaining logical consistency, they revealed three distinct performance regimes in Large Reasoning Models. In low-complexity tasks, standard models actually outperformed their supposedly superior reasoning counterparts. Medium-complexity problems showed marginal benefits from additional "thinking" processes. Most tellingly, both model types collapsed completely when faced with high-complexity tasks.

What makes these findings particularly striking is the counterintuitive scaling behavior the researchers observed. Rather than improving with increased complexity, as genuine intelligence would, the models showed a peculiar pattern: their reasoning effort increased up to a certain point, then declined dramatically despite adequate computational resources.
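The appeal of such puzzle environments is that difficulty becomes a single, precise knob. Taking Tower of Hanoi as an illustrative example of this kind of controlled task (a sketch of the general approach, not the study's actual test harness): the optimal solution for n disks always has exactly 2^n - 1 moves, so complexity scales exponentially while the underlying logic stays identical.

```python
# Tower of Hanoi: difficulty scales precisely with disk count n,
# since the optimal solution always has exactly 2**n - 1 moves.
# This makes it a clean probe of whether a model follows the
# algorithm or merely pattern-matches familiar instances.

def hanoi(n: int, src: str = "A", dst: str = "C",
          aux: str = "B") -> list[tuple[str, str]]:
    """Return the optimal move sequence (from_peg, to_peg) for n disks."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)   # move n-1 disks out of the way
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, dst, src))  # stack the n-1 disks on top

# Solution length doubles (plus one) with each added disk:
for n in (3, 5, 10):
    assert len(hanoi(n)) == 2**n - 1
```

A verifier can check every emitted move mechanically, which is what lets researchers measure exactly where along this complexity curve a model's output stops being a valid solution.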
This suggests that the models weren't actually reasoning at all; they were following learned patterns that broke down when confronted with novel challenges. The study exposed fundamental limitations in exact computation, revealing that these systems fail to use explicit algorithms and reason inconsistently across similar puzzles. When the veneer of sophisticated language is stripped away, what remains is a sophisticated but ultimately hollow mimicry of thought.

These findings align with warnings that Yann LeCun and other leading AI researchers have voiced for years. LeCun has consistently argued that current LLMs will be largely obsolete within five years, not because they'll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence. The core issue isn't technical prowess; it's conceptual. These systems don't understand; they pattern-match. They don't reason; they interpolate from training data. They don't think; they generate statistically probable responses from massive datasets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence.

This disconnect between appearance and reality has profound implications for how we evaluate and deploy AI systems. When we mistake fluency for understanding, we risk making critical decisions based on fundamentally flawed reasoning processes. The danger isn't just technological; it's epistemological. Perhaps most unsettling is how closely this AI limitation mirrors a persistent human cognitive bias: just as we've been deceived by AI's articulate responses, we consistently overvalue human confidence and extroversion, often mistaking verbal facility for intellectual depth.

Overconfidence bias is one of the most pervasive flaws in human judgment: individuals' subjective confidence in their abilities far exceeds their objective accuracy. The bias is especially pronounced in social and professional settings, where confident, extroverted individuals command disproportionate attention and credibility. Research consistently shows that we tend to equate confidence with competence, volume with value, and articulateness with intelligence. The extrovert who speaks first and most frequently in meetings often shapes group decisions, regardless of the quality of their ideas. The confident presenter who delivers polished but superficial analysis frequently receives better evaluations than the thoughtful introvert who offers deeper insights with less theatrical flair.

This psychological tendency creates a dangerous feedback loop. People with low ability often overestimate their competence (the Dunning-Kruger effect), while those with genuine expertise may express appropriate uncertainty about complex issues. The result is a systematic inversion of credibility: those who know the least speak with the greatest confidence, while those who understand the most communicate with nuance and qualification.

The parallel between AI's eloquent emptiness and our bias toward confident communication reveals something profound about the nature of intelligence itself. Both phenomena demonstrate how easily we conflate the appearance of understanding with its substance; both show how sophisticated communication can mask fundamental limitations in reasoning and comprehension. Consider the implications for organizational decision-making, educational assessment, and social dynamics: if we consistently overvalue confident presentation over careful analysis, whether from AI systems or human colleagues, we systematically degrade the quality of our collective reasoning. We create environments where performance theater takes precedence over genuine problem-solving. The Apple study's finding that AI reasoning models fail when faced with true complexity mirrors how overconfident individuals often struggle with genuinely challenging problems while maintaining their persuasive veneer. Both are sophisticated forms of intellectual imposture that persist precisely because they're so convincing on the surface.

Understanding these limitations, both artificial and human, opens the door to more authentic evaluation of intelligence and reasoning. True intelligence isn't characterized by unwavering confidence or eloquent presentation. Instead, it manifests in several key ways: it embraces uncertainty when dealing with complex problems; it acknowledges limitations rather than concealing them; it reasons consistently across different contexts rather than breaking down when patterns become unfamiliar; and, most importantly, it demonstrates genuine understanding through the ability to adapt principles to novel situations.

In human contexts, this means looking beyond charismatic presentation to evaluate the underlying quality of reasoning, creating space for thoughtful, measured responses rather than rewarding only quick, confident answers, and recognizing that the most profound insights often come wrapped in appropriate humility rather than absolute certainty. For AI systems, it means developing more rigorous evaluation frameworks that test genuine understanding rather than pattern matching, acknowledging current limitations rather than anthropomorphizing sophisticated text generation, and building systems that can genuinely reason rather than simply appearing to do so.

The convergence of Apple's AI findings with psychological research on human biases offers valuable guidance for navigating an increasingly complex world. Whether evaluating AI systems or human colleagues, we must learn to distinguish between performance and competence, between eloquence and understanding. This requires cultivating intellectual humility: the recognition that genuine intelligence often comes with appropriate uncertainty, that the most confident voices aren't necessarily the most credible, and that true understanding can be distinguished from sophisticated mimicry through careful observation and testing. To distinguish intelligence from imitation in an AI-infused environment, we need to invest in hybrid intelligence, which arises from the complementarity of natural and artificial intelligences, anchored in the strengths and limitations of both.


Bloomberg
41 minutes ago
S&P 500 Ekes Out Gain as US-China Talks to Resume
Bloomberg Television brings you the latest news and analysis leading up to the final minutes and seconds before and after the closing bell on Wall Street. Today's guests: Ed Ludlow, Bloomberg News; Ross Gerber, Gerber Kawasaki; Daniel Flax, Neuberger Berman; Tony Wang, T. Rowe Price; Chris Palmeri, Bloomberg News; Sinead Colton Grant, BNY Wealth; Angelo Zino, CFRA Research; Justin Bibb, Cleveland Mayor; Leonardo Williams, Durham Mayor; Alicia Caldwell, Bloomberg News; Gregory Valliere, AGF Investments; and Pooja Sriram, Barclays. (Source: Bloomberg)