
Apple's WWDC is here. Investors are still waiting on that AI promise.
Apple's Worldwide Developers Conference kicks off Monday with muted expectations from Wall Street—and the risk that last year's unfulfilled AI promises will overshadow the event.
In 2024, the company delivered its answer to the question "What is Apple going to do with AI?" The announcements included a range of new features under the banner of "Apple Intelligence," but the rollout was delayed.
"Expectations are rightfully tempered this year at WWDC relative to prior years where we had more material announcements," wrote Evercore ISI's Amit Daryanani in a note to clients last week.
Like many, Apple was taken by surprise by the November 2022 launch of OpenAI's ChatGPT, the match that lit the current AI boom. Apple had long used AI successfully throughout its products for specific features, such as improving the quality of photos, videos and audio, but it had failed to make Siri the kind of useful conversational assistant the new generation of chatbots provides.
At WWDC 2024, Apple announced some new AI features that were becoming common elsewhere, such as text and image generation, and notification summaries. But the big new advance would be a redesigned Siri—a real assistant that could work across installed apps and the web, creating automated workflows from simple conversational commands.
But the rollout was delayed until last fall, and on arrival it omitted the new Siri, for which Apple users are still waiting. The features that did ship have been underwhelming and were met with poor reviews.
"Apple's made this promise, this huge thing is coming that's going to change everything across their whole lineup," said prominent tech reviewer Marques Brownlee in November, as features were rolling out. "They've been talking about it for a while, and I think that promise is starting to fade."
In a March statement to Apple blogger John Gruber, an Apple spokesperson admitted that the new Siri is "going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year."
On its May earnings call, Apple CEO Tim Cook made similar remarks.
Ben Reitzes of Melius Research said that management can't be pleased as it gears up for the conference, "but we have no doubt they are making adjustments to the strategy behind the scenes."
"However, it is unlikely we'll hear the company outline precise aspects of its upcoming product plans until September," Reitzes said.
Even if it is racing from behind, no one should count Apple out yet. Its biggest strength is its installed base of more than two billion devices: anything it releases is instantly in front of a huge audience. But Apple needs to do better than it has to date to keep those users engaged, a concern that has already prompted a leadership shake-up on the team building the new Siri.
A large part of the problem is self-inflicted, stemming from Apple's other priorities. Privacy and security are at the heart of everything Apple does, and it would prefer that neither Apple nor anyone else have access to your conversations with Siri. The best way to achieve that is for everything to happen on the device rather than somewhere in the cloud, which is how most of Apple's legacy AI features work.
The first layer of Apple Intelligence is on-device AI, but it is hampered by the limited computing power and memory of consumer devices, and results are mixed. Apple has also built its own cloud, called Private Cloud Compute, whose first order of business is ensuring that no one can see user chats. Apple wants to control the hardware and software so it can guarantee private and secure conversations, but that puts it at a disadvantage against the Nvidia technology so many of its competitors use.
That is why, at least for now, Apple is forced to fall back on the third layer of Apple Intelligence, ChatGPT, where Apple cannot guarantee user privacy.
Security and privacy may turn out to be Apple's biggest hurdles in creating the Siri that its users would like to have, and that Apple promised them a year ago.
Apple wasn't the first to make a PC, smartphone, tablet or smartwatch, but it eventually made category-defining products for all of them. Making Apple Intelligence a leader will be more difficult than any of those.
Write to Adam Levine at adam.levine@barrons.com