Apple's AI isn't a letdown. AI is the letdown


A version of this story appeared in CNN Business' Nightcap newsletter. To get it in your inbox, sign up for free here.
New York
CNN —
Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.
The critique of Apple's halting rollout is not entirely unfair, though it is, at times, missing the point.
Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it's the future! What problems is it solving? Well, so far that's not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple's early ads for its AI was so hostile the company had to pull the commercial.
The real reason companies are doing this is that Wall Street wants them to. Investors have been salivating for an Apple 'super cycle' — a tech upgrade so enticing that consumers will rush to get their hands on the new model.
In a rush to please shareholders, Apple made a rare stumble. The company is owning its error, it seems, and has said the delayed features would roll out 'in the coming year.'
Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.
And that is where the Apple-AI narrative goes off the rails.
AI can only be failed
There's a popular adage in policy circles: 'The party can never fail, it can only be failed.' It is meant as a critique of the ideological gatekeepers who may, for example, blame voters for their party's failings rather than the party itself.
That same fallacy is taking root among AI's biggest backers. AI can never fail, it can only be failed. Failed by you and me, the smooth-brained Luddites who just don't get it. (To be sure, even AI proponents will acknowledge available models' shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)
Tech columnists such as the New York Times' Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.
'Apple is not meeting the moment in AI,' Roose said on his podcast, Hard Fork, earlier this month. 'I just think that when you're building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.'
To which I would counter, respectfully: Absolutely not.
Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3-trillion empire built by the notoriously detail-obsessed Steve Jobs.
The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its 'walled garden' of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.
Apple's obsession with privacy and security is the reason most of us don't think twice about scanning our faces, storing bank account information or sharing our real-time location via our phones.
And not only do we trust Apple to keep our data safe, we trust it to design things that are accessible out of the box. You can buy a new iPhone, AirPods or Apple Watch and trust that the moment you turn it on, a user-friendly system will hold your hand through the setup and seamlessly sync it with your other devices. You will almost never need a user manual filled with tiny print. Even your Boomer parents will be able to navigate FaceTime calls with minimal effort.
Roose contends, at one point in the episode, that 'there are people who use AI systems who know that they are not perfect,' and that those regular users understand there's a right way and a wrong way to query a chatbot.
This is where we, the people, are apparently failing AI. Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise, we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information to us.
Apple, Roose says, should keep pushing AI into its products and just get used to the idea that those features may be unpolished and a little too advanced for the average user.
And again, respectfully, I would ask: To what end?
As Hard Fork co-host Casey Newton notes in the same episode, it's not as if Google or Amazon has figured out some incredible use case that's making users rush to buy a new Pixel phone or an Echo speaker.
'AI is still so much more of a science and research story than it is a product story,' Newton notes.
In other words: Large language models are fascinating science. They are an academic wonder with huge potential and some early commercial successes, such as OpenAI's ChatGPT and Anthropic's Claude. But a bot that's 80 percent accurate — a figure Newton made up, but we'll go with it — isn't a very useful consumer product.
Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom's flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps of opening your email to find the flight number, copying it, then pasting it into Google to find the flight's status.
If it's 100 percent accurate, it's a fantastic time saver. If it is anything less than 100 percent accurate, it's useless. Because even if there's a two percent chance it's wrong, there's a two percent chance you're stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!
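The stranded-mom arithmetic is worth making concrete, because small error rates compound with repeated use. A quick back-of-the-envelope sketch (the figures are illustrative, not measurements of any real assistant):

```python
# Illustrative only: how a small per-answer error rate compounds
# when you rely on an assistant repeatedly. The 98% figure is a
# hypothetical, in the spirit of Newton's made-up 80%.

def failure_odds(accuracy: float, queries: int) -> float:
    """Probability that at least one of `queries` independent
    answers is wrong, given per-answer accuracy."""
    return 1 - accuracy ** queries

# Ask a 98%-accurate assistant one question: 2% chance of a bad answer.
print(f"{failure_odds(0.98, 1):.0%}")   # prints 2%

# Rely on it once a day for a month, and the odds that it has
# misled you at least once climb to nearly a coin flip.
print(f"{failure_odds(0.98, 30):.0%}")  # prints 45%
```

Which is the point: a 2 percent failure rate sounds trivial for one query, but a product people use daily cannot hide behind it for long.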
Bottom line: Apple is not the laggard in AI. AI is the laggard in AI.

