Apple's AI isn't a letdown. AI is the letdown

CNN · 27-03-2025

Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.
The critique of Apple's halting rollout is not entirely unfair. Though it is, at times, missing the point.
Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it's the future! What problems is it solving? Well, so far that's not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple's early ads for its AI was so hostile the company had to pull the commercial.
The real reason companies are doing this is because Wall Street wants them to. Investors have been salivating for an Apple 'super cycle' — a tech upgrade so enticing that consumers will rush to get their hands on the new model.
In a rush to please shareholders, Apple made a rare stumble. The company is owning its error, it seems, and has said the delayed features would roll out 'in the coming year.'
Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.
And that is where the Apple-AI narrative goes off the rails.
There's a popular adage in policy circles: 'The party can never fail, it can only be failed.' It is meant as a critique of the ideological gatekeepers who may, for example, blame voters for their party's failings rather than the party itself.
That same fallacy is taking root among AI's biggest backers. AI can never fail, it can only be failed. Failed by you and me, the smooth-brained Luddites who just don't get it. (To be sure, even AI proponents will acknowledge available models' shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)
Tech columnists such as the New York Times' Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.
'Apple is not meeting the moment in AI,' Roose said on his podcast, Hard Fork, earlier this month. 'I just think that when you're building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.'
To which I would counter, respectfully: Absolutely not.
Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3-trillion empire built by the notoriously detail-obsessed Steve Jobs.
The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its 'walled garden' of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.
Apple's obsession with privacy and security is the reason most of us don't think twice about scanning our faces, storing bank account information or sharing our real-time location via our phones.
And not only do we trust Apple to keep our data safe, we trust it to design things that are accessible out of the box. You can buy a new iPhone, AirPods or Apple Watch and trust that the moment you turn it on, a user-friendly system will hold your hand through the setup and seamlessly sync it with your other devices. You will almost never need a user manual filled with tiny print. Even your Boomer parents will be able to navigate FaceTime calls with minimal effort.
Roose contends, at one point in the episode, that 'there are people who use AI systems who know that they are not perfect,' and that those regular users understand there's a right way and a wrong way to query a chatbot.
This is where we, the people, are apparently failing AI. Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise, we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information to us.
Apple, Roose says, should keep pushing AI into its products and just get used to the idea that those features may be unpolished and a little too advanced for the average user.
And again, respectfully, I would ask: To what end?
As Hard Fork co-host Casey Newton notes in the same episode, it's not as if Google or Amazon has figured out some incredible use case that's making users rush to buy a new Pixel phone or an Echo speaker.
'AI is still so much more of a science and research story than it is a product story,' Newton notes.
In other words: Large language models are fascinating science. They are an academic wonder with huge potential and some early commercial successes, such as OpenAI's ChatGPT and Anthropic's Claude. But a bot that's 80% accurate — a figure Newton made up, but we'll go with it — isn't a very useful consumer product.
Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom's flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps of opening your email to find the flight number, copying it, then pasting it into Google to find the flight's status.
If it's 100% accurate, it's a fantastic time saver. If it is anything less than 100% accurate, it's useless. Because even if there's a 2% chance it's wrong, there's a 2% chance you're stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!
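Here's a rough back-of-the-envelope sketch (mine, not anything Apple or Newton published, and it assumes each query is an independent roll of the dice) of how even a "mere" 2% error rate compounds with regular use:

```python
# How a "rare" 2% per-query error rate compounds over repeated use.
# Assumes independent queries; the 98% accuracy figure is illustrative.
per_query_accuracy = 0.98

for queries_per_month in (10, 30, 100):
    p_at_least_one_miss = 1 - per_query_accuracy ** queries_per_month
    print(f"{queries_per_month} queries: "
          f"{p_at_least_one_miss:.0%} chance of at least one wrong answer")
```

Ask 30 such questions a month and there's roughly a 45% chance that at least one answer is confidently wrong. "Mostly right" is a research milestone, not a consumer product.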
Bottom line: Apple is not the laggard in AI. AI is the laggard in AI.


Related Articles

Meta platforms showed hundreds of "nudify" deepfake ads, CBS News finds

Yahoo · 19 minutes ago

Meta has removed a number of ads promoting "nudify" apps — AI tools used to create sexually explicit deepfakes using images of real people — after a CBS News investigation found hundreds of such advertisements on its platforms.

"We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps," a Meta spokesperson told CBS News in an emailed statement.

CBS News uncovered dozens of those ads on Meta's Instagram platform, in its "Stories" feature, promoting AI tools that, in many cases, advertised the ability to "upload a photo" and "see anyone naked." Other ads in Instagram's Stories promoted the ability to upload and manipulate videos of real people. One promotional ad even read "how is this filter even allowed?" as text underneath an example of a nude deepfake. One ad promoted its AI product by using highly sexualized, underwear-clad deepfake images of actors Scarlett Johansson and Anne Hathaway.

Some of the ads' URL links redirect to websites that promote the ability to animate real people's images and get them to perform sex acts. And some of the applications charged users between $20 and $80 to access these "exclusive" and "advance" features. In other cases, an ad's URL redirected users to Apple's app store, where "nudify" apps were available to download.

An analysis of the advertisements in Meta's ad library found that there were, at a minimum, hundreds of these ads available across the company's social media platforms, including on Facebook, Instagram, Threads, the Facebook Messenger application and Meta Audience Network — a platform that allows Meta advertisers to reach users on mobile apps and websites that partner with the company. According to Meta's own Ad Library data, many of these ads were specifically targeted at men between the ages of 18 and 65, and were active in the United States, European Union and United Kingdom.

A Meta spokesperson told CBS News the spread of this sort of AI-generated content is an ongoing problem and that the company faces increasingly sophisticated challenges in trying to combat it. "The people behind these exploitative apps constantly evolve their tactics to evade detection, so we're continuously working to strengthen our enforcement," the spokesperson said. CBS News found that ads for "nudify" deepfake tools were still available on the company's Instagram platform even after Meta had removed those initially flagged.

Deepfakes are manipulated images, audio recordings, or videos of real people that have been altered with artificial intelligence to misrepresent someone as saying or doing something that the person did not actually say or do.

Last month, President Trump signed into law the bipartisan "Take It Down Act," which, among other things, requires websites and social media companies to remove deepfake content within 48 hours of notice from a victim. Although the law makes it illegal to "knowingly publish" or threaten to publish intimate images without a person's consent, including AI-created deepfakes, it does not target the tools used to create such AI-generated content. Those tools do, however, violate platform safety and moderation rules implemented by both Apple and Meta on their respective platforms.

Meta's advertising standards policy says, "ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive." Under Meta's "bullying and harassment" policy, the company also prohibits "derogatory sexualized photoshop or drawings" on its platforms. The company says its regulations are intended to block users from sharing or threatening to share nonconsensual intimate imagery.

Apple's guidelines for its app store explicitly state that "content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy" is banned.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University's tech research center, has been studying the surge of AI deepfake networks marketing on social platforms for more than a year. He told CBS News in a phone interview on Tuesday that he'd seen thousands more of these ads across Meta platforms, as well as on platforms such as X and Telegram, during that period. Although Telegram and X have what he described as a structural "lawlessness" that allows for this sort of content, he believes Meta's leadership lacks the will to address the issue, despite having content moderators in place.

"I do think that trust and safety teams at these companies care. I don't think, frankly, that they care at the very top of the company in Meta's case," he said. "They're clearly under-resourcing the teams that have to fight this stuff, because as sophisticated as these [deepfake] networks are … they don't have Meta money to throw at it."

Mantzarlis also said that he found in his research that "nudify" deepfake generators are available to download on both Apple's app store and Google's Play store, expressing frustration with these massive platforms' inability to police such content. "The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don't necessarily have the wherewithal to ban them," he said.

"There needs to be cross-industry cooperation where if the app or the website markets itself as a tool for nudification on any place on the web, then everyone else can be like, 'All right, I don't care what you present yourself as on my platform, you're gone,'" Mantzarlis added.

CBS News has reached out to both Apple and Google for comment as to how they moderate their respective platforms. Neither company had responded by the time of writing.

Major tech companies' promotion of such apps raises serious questions about both user consent and online safety for minors. A CBS News analysis of one "nudify" website promoted on Instagram showed that the site did not prompt any form of age verification prior to a user uploading a photo to generate a deepfake image. Such issues are widespread. In December, CBS News' 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people. Despite visitors being told that they must be 18 or older to use the site, and that "processing of minors is impossible," 60 Minutes was able to gain access to uploading photos immediately after clicking "accept" on the age warning prompt, with no other age verification necessary.

Data also shows that a high percentage of underage teenagers have interacted with deepfake content. A March 2025 study conducted by the children's protection nonprofit Thorn showed that among teens, 41% said they had heard of the term "deepfake nudes," while 10% reported personally knowing someone who had had deepfake nude imagery created of them.

What Could Future Banking Look Like If AI Takes Over?

Forbes · 24 minutes ago

Alex Kreger, UX strategist and founder of the financial UX design agency UXDA, designs leading banking and fintech products in 39 countries.

The imminent integration of AI into daily routines promises to dramatically reshape our lives over the next five years, propelled by advancements such as ChatGPT, Gemini and Grok. This shift is driven by the recognition that human capacities, while remarkable, cannot match the vast research, creative and analytical potential of artificial intelligence (AI). As we project into the future, it becomes clear that AI will also redefine digital banking experiences and grant individuals financial capabilities that were once unimaginable.

As a design strategist developing financial services for leading banks and fintech providers in 39 countries, I'm curious to envision how AI will overhaul the typical banking experience for everyday consumers. Although the complete adoption of AI across the financial sector has yet to unfold, it is crucial to anticipate its eventual impact. The question is no longer 'What if?' but rather 'How?'—and how best to brace ourselves for the changes that lie ahead.

Banks already maintain enormous stores of customer data, but unlocking its true power demands cutting-edge technology. AI may well be the solution that helps institutions tackle customer demands with speed and accuracy. By channeling this data effectively, banks can provide individualized products at precisely the right time—an endeavor impossible for standard processes alone.

The current data stockpile is merely a starting point. As digital tools evolve, financial institutions will gather much more data from smartphones, social networks, public service APIs, open banking APIs and IoT devices through 5G. This explosion of information calls for a robust, near-superhuman capacity to sift through the noise and pinpoint what truly matters—something AI might deliver within the next decade.

In the coming years, the most significant AI-driven breakthroughs are likely to include:

• Personalized Offers: A data-rich approach makes customized proposals more precise and simultaneously mitigates risks by matching the ideal product to the ideal customer.

• Investment: By eliminating human biases, AI could evaluate a multitude of market and business variables to foresee investment success.

• Security: AI could expedite verification by reducing the constant need for identity confirmations.

• Financial Advisory: With the aid of big data and personal profiling, AI could illuminate each client's needs, generating in-depth forecasts and healthier financial practices.

• Support: AI-powered bots could offer prompt, tailored solutions, greatly enhancing customer service.

• Alternative Processing: AI-powered voice, gestures, neurotechnology, VR and AR interfaces will enable banking transactions beyond conventional channels.

With Statista expecting generative-AI spend in banking to rocket to $85 billion by 2030, it's time for leaders to start by putting AI into their strategic plan—not just the tech roadmap. Hire a senior executive (a chief AI officer) who owns value creation and AI risk, and spin up a cross-functional 'AI initiatives' team that groups stakeholders, data scientists and product designers. Then move to an API-first, event-streaming service architecture so models can surface predictions (e.g., 'potential cash shortfall Friday') in real time.

Early adopters are showing where the value sits, and leaders should take note. J.P. Morgan's Quest IndexGPT can generate investable indices; Morgan Stanley's Debrief can summarize adviser meetings; NatWest's Cora+ can handle nuanced customer queries. At the same time, Wall Street majors—from Goldman to Citi—are scaling internal LLM-powered co-pilots for drafting IPO documents, surfacing research or searching policies.

Customer-facing assistants are already setting the bar. Bank of America's Erica has served 20 million active users, and Wells Fargo's Fargo went from 21 million interactions in 2023 to 245 million in 2024 by using a privacy-first pipeline that strips PII before any LLM call. On the insight side, RBC's NOMI Forecast crunches account data to predict the next seven days' cash flow; more than 900,000 clients have generated 10 million interactions since its late-2021 launch.

Generative models excel at turning trillions of events into the next best micro-experience. Commonwealth Bank of Australia's Customer Engagement Engine, for example, ingests 3.1 trillion data points and runs 2,000 real-time models, lifting loyalty with recommendations so much that mobile users now log in 67 times a month on average. The key is to couple a real-time feature store with small language models that handle intent, then let a larger model draft the personalized nudge or insight.

Start with one or two journeys where better prediction or conversation will be felt within weeks—fraud alerts or an SME cash-flow coach. Ship, measure, retrain and fold the learning into a reusable component library so subsequent squads stand on the shoulders of the first.

The biggest headwind is regulation: Europe's AI Act is already in force and will classify credit-scoring, KYC, trading and robo-advice models as 'high risk' by August 2026. A second challenge is talent and culture: banks are hiring aggressively, yet even Deutsche Bank admits the scarcity of seasoned AI professionals and the difficulty of embedding them in legacy teams. Third, security and trust: four in five bank leaders say they fear AI-enabled cyberattacks, and front-office chatbots can still hallucinate or breach privacy if left unsupervised. Mitigate by adopting zero-trust data-access patterns, embedding red-teaming into MLOps, and running 'constitutional' or retrieval-augmented QA layers that force a model to cite source documents.

Initially, AI's role is to automate foundational tasks. Over time, however, I expect that it will evolve to deliver comprehensive solutions across all industries, including finance. After two decades of digital self-service in finance, AI can restore the conversation—context-aware, always on, and scaled to every customer.

AI's full potential is truly immeasurable, and its effects on banking customer experience—and countless other sectors—will be transformative. By merging technological advancements with thoughtful user experience design, forward-looking companies can build a future where AI not only empowers individuals but also redefines entire industries. The era of AI-driven finance is fast approaching, and now is the time to prepare for its far-reaching influence.
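To make the 'strip PII before any LLM call' pattern mentioned above concrete, here is a minimal sketch of such a redaction step. It is my own illustration, not any bank's actual pipeline; the regex patterns, placeholder labels and the redact_pii helper are simplifying assumptions.

```python
import re

# Minimal sketch of a privacy-first step that redacts obvious PII
# before a prompt is sent to an external LLM. The patterns below are
# illustrative, not an exhaustive production-grade PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{8,17}\b"),  # bank-account-like digit runs
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (acct 123456789012) asked about overdraft fees."
print(redact_pii(prompt))
# -> Customer [EMAIL] (acct [ACCOUNT]) asked about overdraft fees.
```

A production pipeline would likely pair pattern matching like this with a trained named-entity model and reversible tokenization, so the original values can be restored when the model's answer is returned to the customer.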

Learning By Sharing: How GenAI Can Be The Giving Tree

Forbes · 25 minutes ago

Nitin Rakesh is the CEO and Managing Director of Mphasis and coauthor of the award-winning book 'Transformation in Times of Crisis.'

Our world is witnessing a wave of advancements, from the emergence of automation to the implementation of artificial intelligence. For senior leaders, this acceleration presents opportunities and challenges. While striving to adopt cutting-edge technologies, they are also attempting to ensure these align with business objectives and ethical standards.

One of the most transformative yet polarizing of these technologies is generative AI (GenAI). While it promises new avenues for creativity, problem-solving and efficiency, it also raises apprehensions about its impact on typical decision-making processes. In this sense, GenAI remains a bit of a paradox for business leaders, as it is both a source of curiosity and a cause for concern. While executives see its immense potential as a transformative tool capable of boosting creativity, streamlining operations and generating efficiency gains across functions, there is an underlying wariness about how it can disrupt established workflows.

In the face of GenAI's dual nature, executives are navigating a critical challenge: how to integrate this powerful technology without losing the distinctiveness of human intuition. Rather than framing GenAI as a disruptive force to be managed, a more productive perspective may be to see it as a collaborator—an enabler of shared growth and continuous learning. When viewed this way, I believe GenAI can become more of a partner in learning and development with whom leaders can foster a mutually beneficial relationship mirroring the spirit of mutual giving.

However, embracing this potential demands a shift in mindset. It invites leaders to move beyond passive adoption and toward active stewardship of the human-AI dynamic. This reframing introduces important questions: Are leaders thoughtfully measuring GenAI's contributions, not just in efficiency but in depth and relevance? Are they engaging with it in ways that reflect their own values and expertise—training it, shaping it and learning from it in return? And perhaps most importantly, is their ongoing interaction with GenAI expanding their capacity for insight and growth?

AI is not an autonomous force—it is a reflection of the data, intentions and perspectives we bring to it. Its development is inseparable from human input. The real opportunity lies not just in what AI can do for us, but in how the human-AI relationship can evolve into one of reciprocal enrichment.

Consider an iconic children's literature classic: The Giving Tree by Shel Silverstein. In the story, the tree selflessly offers its apples, branches and eventually its entire trunk to the boy. This tale invites us to reflect on the nature of giving, taking and the balance of relationships. In much the same way, GenAI can be seen as a tireless provider—offering its computational power, adaptability and insights to organizations. Yet unlike the tree, GenAI does not give from a place of emotion or altruism. It responds to the quality of its inputs, the clarity of its training and the intentionality of its use.

This analogy prompts a critical shift in how leaders approach AI. While it's tempting to focus solely on what GenAI can do for us—automating tasks, generating insights, fueling innovation—the more profound question is: What are we giving back? How are we shaping, stewarding and engaging with AI to ensure it grows in a direction aligned with human values and long-term impact?

If organizations treat AI merely as an extractive tool, they risk building unsustainable dependencies. But if leaders approach the technology as a partner in co-evolution—offering guidance, expertise and ethical oversight—then GenAI becomes more than a resource. It becomes a trusted collaborator, capable of growing alongside the organization.

Beyond GenAI's adoption, leaders must know its value, inspiring their teams to integrate AI into daily workflows. Evangelizing AI within the executive team is crucial to ensuring collective alignment, fostering a culture where AI is viewed as an augmentation of human expertise rather than a replacement. This requires a deep understanding of what GenAI can do, allowing leaders to define where it can have the most meaningful impact.

For GenAI to truly benefit organizations, leaders must anchor its application in clear, measurable business objectives. Identifying the right problems for GenAI to solve and aligning its deployment with strategic goals ensures its use remains practical and value-driven. Governance is equally critical, ensuring that AI-driven decisions uphold ethical standards and comply with regulatory frameworks. Data quality, infrastructure readiness and an AI-savvy workforce further determine how effectively GenAI can be leveraged. The organizations that invest in these foundational aspects will be best positioned to turn AI into a long-term growth partner, like the enduring relationship between the boy and the Giving Tree.

When leaders take ownership and adopt an interactive approach, they create an environment of collaboration. Talented individuals are also drawn to such leaders—not just for their effectiveness but for their generosity and team-focused mindset.

It is crucial that leaders actively coach GenAI so it evolves in tandem with organizational needs. By continuously refining and adapting its algorithms, leaders can make GenAI more iterative, refined, intelligent and thoughtful. Much like mentoring human employees, providing AI with feedback allows it to improve over time. For instance, organizations have enhanced customer service chatbots by training them on real user interactions, enabling them to respond with greater empathy and accuracy. These iterations ensure that AI evolves to handle more complex queries, improving both customer satisfaction and operational efficiency. With each successive interaction, AI becomes better suited to the evolving needs of an organization (a minimal sketch of such a feedback loop appears at the end of this piece).

This echoes the trajectory suggested by Moore's Law, the observation that computing power roughly doubles every two years. Just as computing capacity expands exponentially, so too can GenAI's ability to solve complex problems, as long as it is actively coached. As AI models become more advanced, the need for a growing network of skilled human trainers becomes essential for their continued improvement. The constant feedback and refinement will fuel GenAI's exponential development—enhancing its capacity to transform businesses.

The key to thriving in this new era of AI is learning how to strike the right balance with thoughtful strategy. As organizations continue to work closely with GenAI, partnerships will evolve, leading to exciting outcomes. What will be crucial to keep in mind is that this exchange between human and machine fosters sustainable progress built on collaboration and fair practice.
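The chatbot feedback loop described above can be sketched minimally as follows. The file format, function names and 0/1 rating scheme are my own illustrative assumptions, not any specific vendor's API.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of a feedback loop: log each chatbot exchange with a
# user rating, then periodically export the well-rated examples as
# candidate fine-tuning data for the next training iteration.
FEEDBACK_LOG = "chatbot_feedback.jsonl"

def log_interaction(user_message: str, bot_reply: str, rating: int) -> None:
    """Append one rated exchange (rating: 1 = helpful, 0 = not helpful)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_message,
        "bot": bot_reply,
        "rating": rating,
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def export_training_examples(min_rating: int = 1) -> list[dict]:
    """Keep only well-rated exchanges as candidate fine-tuning examples."""
    examples = []
    with open(FEEDBACK_LOG, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["rating"] >= min_rating:
                examples.append({"prompt": record["user"],
                                 "completion": record["bot"]})
    return examples

log_interaction("How do I reset my card PIN?",
                "You can reset it in the app under Cards > Security.", 1)
print(len(export_training_examples()), "examples ready for fine-tuning")
```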
