Apple WWDC 2025 Live Updates: New software for iPhones, iPads, MacBooks, Apple Watch

Time of India · 5 days ago

We are just hours away from WWDC 2025, which kicks off later today, June 9, with Apple CEO Tim Cook's keynote. Apple is teasing WWDC25 with a new tagline: 'Sleek peek.' 'A week of technology and creativity. All online and free,' Apple says. Apple originally teased WWDC 2025 with the tagline 'On the horizon'; the updated 'Sleek peek' tagline appears to be a clearer reference to the redesign expected across all of its platforms this year. At WWDC 2025, Apple will announce its next round of software updates for iPhones, iPads, MacBooks, Apple Watch and Apple TV. This year, Apple is expected to adopt year-based naming for its OS updates: no iOS 19, but iOS 26. The releases are expected to be called iOS 26, iPadOS 26, macOS 26, tvOS 26, watchOS 26 and visionOS 26. WWDC 2025 gets started at 10:30 PM IST. Stay with TOI Tech here for all the latest WWDC 2025 updates.


Related Articles

Apple Paper questions path to AGI, sparks division in GenAI group
Time of India · an hour ago

New Delhi: A recent research paper from Apple on the limitations of large reasoning models in artificial intelligence has left the generative AI community divided, sparking significant debate over whether the current path AI companies are taking towards artificial general intelligence is the right one.

What did Apple find?

The paper, titled The Illusion of Thinking, published earlier this week, argues that even the most sophisticated large reasoning models do not genuinely think or reason in a human-like way. Instead, they excel at pattern recognition and mimicry, generating responses that only appear intelligent but lack true comprehension or conceptual understanding. The study used controlled puzzle environments, such as the popular Tower of Hanoi puzzle, to systematically test reasoning abilities across varying complexities in large reasoning models such as OpenAI's o3 Mini, DeepSeek's R1, Anthropic's Claude 3.7 Sonnet and Google's Gemini Flash. The findings show that while large reasoning and language models may handle simple or moderately complex tasks, they suffer total failure on high-complexity problems, and this failure occurs despite sufficient computational resources.

Widespread support for Apple's findings

Gary Marcus, a cognitive scientist and a known sceptic of the claims surrounding large language models, views Apple's work as compelling empirical evidence that today's models primarily repeat patterns learned during training on vast datasets, without genuine understanding or true reasoning capabilities. "If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote," Marcus wrote in his blog. Marcus' arguments echo earlier comments from Meta's chief AI scientist Yann LeCun, who has argued that current AI systems are mainly sophisticated pattern-recognition tools rather than true thinkers.

The release of Apple's paper ignited a polarised debate across the broader AI community, with many panning the design of the study rather than its findings.

On the other hand...

A published critique of the paper by researchers from Anthropic and San Francisco-based Open Philanthropy said the study has issues in its experimental design, namely that it overlooks the models' output limits. In an alternate demonstration, the researchers tested the models on the same problems but allowed them to use code, resulting in high accuracy across all the tested models. The critique that the study fails to account for output limits, and for what the models can do with code, has also been raised by other AI commentators and researchers, including Matthew Berman, a popular AI commentator and researcher. "SOTA models failed The Tower of Hanoi puzzle at a complexity threshold of >8 discs when using natural language alone to solve it. However, ask it to write code to solve it, and it flawlessly does up to seemingly unlimited complexity," Berman wrote in a post on X (formerly Twitter).
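For context on why the coding route is so much easier, the program in question is a textbook exercise. Here is a minimal Swift sketch of the classic recursive Tower of Hanoi solution; the function and peg names are illustrative, not taken from the paper or its test harness:

```swift
// Classic recursive Tower of Hanoi: move n discs from one peg to another,
// using a third peg as scratch space. The optimal solution always takes
// 2^n - 1 moves, which is why difficulty climbs so steeply with disc count.
func hanoi(_ n: Int, from: Character, to: Character, via: Character,
           moves: inout [String]) {
    guard n > 0 else { return }
    hanoi(n - 1, from: from, to: via, via: to, moves: &moves)   // clear the way
    moves.append("disc \(n): \(from) -> \(to)")                 // move the largest disc
    hanoi(n - 1, from: via, to: to, via: from, moves: &moves)   // restack on top
}

var moves: [String] = []
hanoi(9, from: "A", to: "C", via: "B", moves: &moves)
print(moves.count)  // 511 moves for 9 discs, just past the >8-disc threshold Berman cites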
Industry impact

The study highlights Apple's more cautious approach to AI compared with rivals like Google and Samsung, which have aggressively integrated AI into their products, and helps explain its hesitancy to fully commit to AI, in contrast with the industry's prevailing narrative of rapid progress. Many questioned the timing of the study's release, coinciding with Apple's annual WWDC event, where it announces its next software updates. Chatter across online forums suggested the study was more about managing expectations in light of Apple's own struggles with AI. That said, practitioners and business users argue that the findings do not change the immediate utility of AI tools for everyday applications.

Why superintelligent AI isn't taking over anytime soon
Mint · 3 hours ago

A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn't enough to claim that their AI is the best. All three have recently insisted that it's going to be so good, it will change the very fabric of society. Even Meta, whose chief AI scientist has been famously dismissive of this talk, wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg's dream of AI superintelligence, that is, an AI smarter than we are.

"Humanity is close to building digital superintelligence," Altman declared in an essay this week, and this will lead to "whole classes of jobs going away" as well as "a new social contract." Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.

Before you get nervous about all the times you were rude to Alexa, know this: a growing cohort of researchers who build, study and use modern AI aren't buying all that talk. The title of a fresh paper from Apple says it all: "The Illusion of Thinking." In it, a half-dozen top researchers probed reasoning models, the large language models that "think" about problems longer, across many steps, from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.

Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users, astonishingly far reach and fast growth for a service released just 2½ years ago. But these critics argue there is a significant hazard in overestimating what it can do, and in making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves.

Apple's paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today's "reasoning" AIs, hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence, are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you're using an AI chatbot or a reasoning model, all systems fail utterly at more complex tasks. Apple's researchers found "fundamental limitations" in the models: when taking on tasks beyond a certain level of complexity, these AIs suffered "complete accuracy collapse." Similarly, engineers at Salesforce AI Research concluded that their results "underscore a significant gap between current LLM capabilities and real-world enterprise demands."

Importantly, the problems these state-of-the-art AIs couldn't handle are logic puzzles that even a precocious child could solve, with a little instruction. What's more, when you give these AIs that same kind of instruction, they can't follow it. Apple's paper has set off a debate in tech's halls of power, Signal chats, Substack posts and X threads, pitting AI maximalists against skeptics.

"People could say it's sour grapes, that Apple is just complaining because they don't have a cutting-edge model," says Josh Wolfe, co-founder of venture firm Lux Capital. "But I don't think it's a criticism so much as an empirical observation." The reasoning methods in OpenAI's models are "already laying the foundation for agents that can use tools, make decisions, and solve harder problems," says an OpenAI spokesman. "We're continuing to push those capabilities forward."

The debate over this research begins with the implication that today's AIs aren't thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data. Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple's paper, along with related work, exposes flaws in today's reasoning models, suggesting they're not the dawn of human-level ability but rather a dead end. "Part of the reason the Apple study landed so strongly is that Apple did it," he says. "And I think they did it at a moment in time when people have finally started to understand this for themselves."

In areas other than coding and mathematics, the latest models aren't getting better at the rate they once did, and the newest reasoning models actually hallucinate more than their predecessors. "The broad idea that reasoning and intelligence come with greater scale of models is probably false," says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other cutting-edge AI to sense real-world environments. Today's models have inherent limitations that make them bad at following explicit instructions, the opposite of what you'd expect from a computer, he adds. It's as if the industry is creating engines of free association: they're skilled at confabulation, but we're asking them to take on the roles of consistent, rule-following engineers or accountants.

That said, even those who are critical of today's AIs hasten to add that the march toward more-capable AI continues. Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods, such as giving step-by-step feedback on models' performance and adding more resources when they encounter harder problems, could help AI work through bigger problems and make better use of conventional software. From a business perspective, whether or not current systems can reason, they're going to generate value for users, says Wolfe. "Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn't be surprised if these limitations are overcome in practice in the near future," says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI.

Meanwhile, the true believers are undeterred. Just a decade from now, Altman wrote in his essay, "maybe we will go from solving high-energy physics one year to beginning space colonization the next year." Those willing to "plug in" to AI with direct, brain-computer interfaces will see their lives profoundly altered, he adds. This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences.

Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should, even as it's shown itself to have antisocial tendencies such as "opportunistic blackmail", and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most. "Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing," says Ortiz. "So for example, if you want to do your taxes, you'd want to stick with something more like TurboTax than ChatGPT."

Write to Christopher Mims at

iPhone Users Can Finally Know When Their Battery Will Die
News18 · 4 hours ago

iPhone battery management is getting better thanks to the new features added in the iOS 26 update.

The iOS 26 update is bringing a useful feature for iPhones: Apple is finally making it easy to know when your iPhone will be charged up to 80 percent, with these details available on the lock screen itself. The iOS 26 update has a lot of new features, including the Liquid Glass design overhaul, but some of the smaller tools may prove just as important as those headline changes. The new intelligent battery saver mode, called Adaptive Power, was tipped to be introduced with the iOS 26 update, and iPhone users will soon be able to enable it on their devices.

iPhone users are finally getting a reliable tool from Apple to help them analyse their battery usage and trends. The new power mode goes one step further by showing the estimated time left for the iPhone to hit 80 percent. This estimate appears right next to the charging percentage on the lock screen while the iPhone is charging.

There are multiple benefits to this feature. First, iPhone users will know how long they need to keep their device plugged in. Second, they can gauge the charging speed offered by their charger and replace it for faster results if necessary. So, if you are using a slow charger and the iPhone is showing 8 minutes to 80 percent, you can look at replacing the adapter to get faster results.

First Published: June 13, 2025, 07:30 IST
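iOS does not expose the system's charge-time estimate through a public API, but the underlying arithmetic is straightforward. Here is a minimal Swift sketch that approximates minutes-to-80-percent from the current battery level, assuming a constant charging rate; the function name and the default rate are illustrative assumptions, not part of iOS 26's Adaptive Power feature:

```swift
import UIKit

// Illustrative approximation only: iOS 26's own estimate is computed by
// the system and is not available through a public API. The assumed
// charging rate is a hypothetical value in percentage points per minute.
func minutesToEightyPercent(assumedRatePerMinute: Float = 1.0) -> Int? {
    let device = UIDevice.current
    device.isBatteryMonitoringEnabled = true   // required before reading battery info

    // Only meaningful while the device is actually charging.
    guard device.batteryState == .charging else { return nil }

    let level = device.batteryLevel            // 0.0...1.0, or -1.0 if unknown
    guard level >= 0, level < 0.8 else { return nil }

    let pointsRemaining = (0.8 - level) * 100  // percentage points left to 80%
    return Int((pointsRemaining / assumedRatePerMinute).rounded(.up))
}

// Example: at 60% on an assumed 1%/min charger this prints "About 20 min to 80%".
if let minutes = minutesToEightyPercent() {
    print("About \(minutes) min to 80%")
}
```

In practice the charging rate is not constant (it tapers as the battery fills), which is presumably why Apple surfaces a system-computed estimate rather than leaving the calculation to apps.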
