
I Wanted to Switch to Pixel 10, but This iOS 26 Feature Might Keep Me on iPhone
I switched from a Google Pixel 3 XL to Apple's iPhone 12 Pro Max in 2021, and in the years since, I've dearly missed Google's Hold For Me. The feature is so useful that I'm shocked it hasn't been brought to the iPhone, or even to other Android phones, in the meantime. Hold For Me saved me from the misery of listening to awful hold music whenever I needed to call a business, my health insurance provider or my cellphone carrier, or handle any of the myriad other adulting tasks that still require speaking with a representative. Instead, the Google feature would helpfully silence my phone while keeping the call active, listen to the hold music for me, then ring when it was time to return to the call while alerting the representative that I'd be back shortly.
And so when Apple announced Hold Assist at the Worldwide Developers Conference 2025 -- a feature that sounds awfully similar to the Pixel's Hold For Me -- I was thrilled. I've been eyeing a switch back to Android for the rumored Pixel 10, partly because I've missed having these call controls for everyday issues. With iOS 26, Hold Assist should detect hold music and give you the option to silence the call while keeping it active; when the representative comes back on, the phone will notify you that it's time to return to the call. We'll have to wait until at least the public beta to try the feature out, but on paper, it sounds almost exactly like the Pixel's.
The Hold For Me feature debuted in 2020 with the Pixel 5 and 4A.
Google/Screenshot by Sara Tew/CNET
While I'm glad that the iPhone will finally have an equivalent to this feature, it's worth pointing out how long it has taken such calling enhancements to make their way outside of Google's Pixel line. Google introduced Hold For Me in 2020, yet other Android phone makers such as Samsung and OnePlus still don't offer their own take on the idea.
The new Call Screening feature in iOS 26 is similar to the Pixel's Call Screen option, but it sounds like Apple's rendition will take a more automated approach. Apple's version will collect information from an unknown caller for you, such as the person's name and reason for calling, then present it as a summary to help you decide whether to pick up. You can also send more prompts as needed if you're still unsure.
In iOS 26, Hold Assist will keep the call active while your phone stays silenced.
Apple/Screenshot
Google's solution lets you pick the questions the caller is asked and, instead of a summary, shows you a live text transcription of the call as it happens.
What I appreciate most about these features is that they remember the iPhone is, at the end of the day, a phone. And spam callers remain as much of a problem as ever, especially as AI voice clones add a new layer to the scams trying to reach people.
Until these features arrive with iOS 26 later this year, I'll just have to keep bringing my patience to my next call to my health insurance provider -- and keep my fingers crossed that hold music becomes a thing of the past once Hold Assist is widely available.

Related Articles


Forbes
24 minutes ago
How To Build Scalable, Reliable And Effective Internal Tech Systems
In many businesses, platform engineers serve two sets of customers: external clients and internal colleagues. When building tools for internal use, following the same user-centered design principles applied to customer-facing products isn't just good practice—it's a proven way to boost team efficiency, accelerate development and improve overall user satisfaction. Below, members of Forbes Technology Council share key design principles platform engineers should keep front and center whether they're building for clients or colleagues. From prioritizing real team needs to planning ahead for worst-case scenarios, these strategies can ensure internal systems are scalable, reliable and truly supportive of the teams they're built for.

1. Minimize User Friction
The one core design principle platform engineers should keep front and center when building internal tools is minimizing user friction by streamlining the journey and improving cycle time. Additionally, internal tools should include clear feedback mechanisms to help users quickly identify and resolve issues, along with just-in-time guidance to support user education as needed. - Naman Raval

2. Build With External Use In Mind
You should always consider the possibility that an internal tool may eventually end up being an external tool. With that in mind, you should try not to couple core logic to internal user information. - David Van Ronk, Bridgehead IT

3. Design With Empathy
It's important to design with empathy. Internal tools should prioritize user experience for the engineers and teams who rely on them. Simple, intuitive interfaces and seamless workflows reduce friction, enhance productivity and encourage adoption—making the tool not just functional, but loved. - Luis Peralta, Parallel Plus, Inc.

4. Focus On Simplicity
Ease of use and intuitive design must be front and center when building internal tools. Features that are overly nested or require significant learning time directly impact productivity. This inefficiency can be quantified in terms of human hours multiplied by the number of resources affected, potentially leading to substantial revenue loss, especially for larger organizations. - Hari Sonnenahalli, NTT Data Business Solutions

5. Adopt Domain-Driven Design And A 'Streaming Data First' Approach
Platform engineers should prioritize domain-driven design to explore, access and share data seamlessly. As cloud diversification and real-time data pipelines become essential, embracing a 'streaming data first' approach is key. This shift enhances automation, reduces complexity and enables rapid, AI-driven insights across business domains. - Guillaume Aymé

6. Build Scalable Tools With A Self-Service Model
A self-service-based scaled service operating model is critical for the success of an internal tool. Often, engineers take internal stakeholders for granted, not realizing they are their customers—customers whose broader use of an internal tool will make or break their product. Alongside scalable design, it will be equally important to have an organizational change management strategy in place. - Abhi Shimpi

7. Prioritize Cognitive Leverage
Platform engineers should prioritize cognitive leverage over just reducing cognitive load. Internal tools should simplify tasks, amplify engineers' thinking and accelerate decision-making by surfacing context, patterns and smart defaults. - Manav Kapoor, Amazon

8. Empower Developers With Low-Dependency Tools
The platform engineering team should strive to minimize dependencies on themselves when designing any solutions. It's crucial to empower the development team to use these tools independently and efficiently. - Prasad Banala, Dollar General Corporation

9. Lead With API-Driven Development
Platform engineers should prioritize API-driven development over jumping straight into UI when building internal tools. Starting with workflows and backend design helps map data, avoid duplicated requests and reduce long-term tech debt. Though slower up front, this approach creates scalable, reliable tools aligned with actual business processes, not just quick fixes for internal use. - Jae Lee, MBLM

10. Observe Real Workflows
Platform engineers should design for the actual job to be done, not just stated feature requests. They should observe how teams work and build tools that streamline those critical paths. The best internal tools solve real workflow bottlenecks, not just surface-level asks from teammates. - Alessa Cross, Ventrilo AI

11. Favor Speed, Flexibility And Usability
You have to design like you're building a food truck, not a fine-dining kitchen—fast, flexible and usable by anyone on the move. Internal tools should favor speed over ceremony, with intuitive defaults and minimal setup. If your engineers need a manual just to order fries (or deploy code), you've overdesigned the menu. - Joel Frenette

12. Ensure Tools Are Clear, Simple And Well-Explained
When building internal tools, platform engineers should focus on making them easy and smooth for developers to use. If tools are simple, clear and well-explained, developers can do their work faster and without confusion. This saves time, reduces mistakes and helps the whole team work better. - Jay Krishnan, NAIB IT Consultancy Solutions WLL

13. Embrace User-Centric Design
Platform engineers should prioritize user-centric design. They must focus on the needs, workflows and pain points of internal users to create intuitive, efficient tools. This principle ensures adoption, reduces training time and boosts productivity, as tools align with real-world use cases, minimizing friction and maximizing value for developers and teams. - Lori Schafer, Digital Wave Technology

14. Prioritize Developer Experience
Internal platforms must prioritize developer experience above all. The best tools feel invisible—engineers use them without friction because interfaces are intuitive, documentation is clear and workflows are streamlined. When developers spend more time fighting your platform than building with it, you've failed your mission. - Anuj Tyagi

15. Bake In Observability
Platform engineers should treat internal tools as evolving ecosystems, not static products. A core design principle is observability by default—bake in usage analytics, error tracking and feedback hooks from day one. This ensures tools organically improve over time and are grounded in real-world behavior, not assumptions, creating systems that adapt as teams and needs evolve. - Pawan Anand, Ascendion

16. Leverage Progressive Abstraction
Progressive abstraction lets internal platforms scale with developer maturity. Engineers can start with guided, low-friction 'golden paths' for beginners while enabling power users to customize, script or access APIs. This balance avoids tool sprawl, supports growth and keeps platforms inclusive, adaptive and relevant over time. - Anusha Nerella, State Street Corporation

17. Streamline Processes Through Predictable, Intuitive Interfaces
Internal tools must streamline processes instead of creating additional obstacles. Focus on clear, intuitive interfaces; fast onboarding with minimal documentation; and solid default settings that include advanced options for experienced users. Build in observability and self-service support, and strive for consistent, predictable behavior. - Saket Chaudhari, TriNet Inc.

18. Design Easy Authentication And Authorization Systems
There should be ease of authentication and authorization. When building internal tools, you shouldn't design in silos. You must consider how many clicks it takes for an analyst, mid-call with a client, to launch what they need for troubleshooting. Seamless access, least privilege and contextual authentication aren't just security features—they're reflections of good architecture and thoughtful design. - Santosh Ratna Deepika Addagalla, Trizetto Provider Solutions

19. Engineer For High-Stress, Critical Scenarios
A word of advice is to engineer for the worst day, not the average day. Internal tools become critical lifelines during incidents, yet we often design them for sunny-weather scenarios. When a system is melting down at 3 a.m. and the on-call engineer is bleary-eyed, that's when your tool's UX truly matters. Simple interfaces with clear error messages become worth their weight in gold. - Ishaan Agarwal, Square

20. Ensure Users Don't Need Deep Platform Knowledge
Design for self-service and extension. Internal tools should empower teams to solve problems without deep platform knowledge. Engineers should hide complexity behind sensible defaults and include clean abstractions that allow extensions and clear documentation. Platforms succeed when others can build confidently without needing to ask for help every time. - Abhishek Shivanna, Nubank

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Forbes
25 minutes ago
AI Reliability At Scale: Lessons From Enterprise Deployment
Alan Nichol is Cofounder & CTO of Rasa, a conversational AI framework that has been downloaded over 50 million times.

Enterprises have no trouble proving that generative AI can make an impressive demo. The challenge is proving that it works consistently. A proof of concept might show off a powerful language model completing tasks or holding fluid conversations, but turn that same system loose in production, and cracks start to show: missed steps, unpredictable behavior, escalating costs.

We've seen how many promising initiatives stall once they move beyond controlled environments. An assistant performs well in testing but struggles when faced with complex, real-world conversations. The root issue often isn't the model but the lack of structure. Too many systems rely on prompt engineering and trial and error, expecting the model to behave like a deterministic system. It won't. Enterprises need architecture that delivers clarity, control and repeatability. That starts by separating understanding from execution and designing assistants that operate with purpose and stability. This article outlines the most common failure points, the architectural patterns that work and what enterprises need to prioritize when building AI that meets real business standards.

The Failure Mode: When Reliability Breaks

Generative AI fails quietly at first. A missed step, a misinterpreted update, an unexpected fallback—these issues appear small in isolation. But when they surface repeatedly across thousands of interactions, the system begins to break. Conversations derail, users lose trust and costs spiral. We've seen assistants that can't hold context across turns, jump to the wrong conclusion or reset the conversation when the user simply tries to clarify something. These are not fringe cases. They're direct results of systems built entirely around prompting and agentic reasoning.
When a language model is asked to carry the entire load (understanding, planning, executing), errors are inevitable. Without a clear separation between conversational logic and business execution, assistants behave inconsistently, handling the same request one moment and failing the next. Latency compounds as models attempt multiple reasoning steps per turn.

Structured automation made the biggest difference. In Rasa's testing, assistants consistently followed business logic, while less structured approaches failed in over 80% of cases. These systems also delivered up to 4x faster response times without sacrificing consistency.

Cost overruns follow the same pattern. Fully agentic architectures burn through tokens with each decision loop. When per-message costs were measured, systems using prompt-based execution consumed over twice the resources of structured approaches.

These aren't isolated technical glitches. They're symptoms of an architecture whose guardrails don't function as intended. Klarna's shift back to human support highlights what happens when AI is deployed without sufficient structure or oversight. Building for reliability means designing assistants to handle real-world inputs, not just scripted demos. That starts by giving structure to how assistants interpret, decide and act.

The Structural Fix: Rethinking AI Architecture

Real reliability starts with structural design. When assistants behave inconsistently, it's usually because their architecture treats the model as both the interpreter and the executor. This entanglement makes it hard to control behavior, diagnose issues or scale confidently. Reliable systems assign clear roles: the language model handles interpretation, while deterministic components manage execution. Instead of letting an LLM guess the next action, these systems convert user input into executable commands that flow through structured, deterministic logic.
That shift creates a predictable environment where behavior always aligns with business logic. Design LLMs to understand user input in context, mapping it to commands rather than guessing what to do next. Those commands move through structured flows that handle decision points, API calls and edge cases. This clear separation reduces variance, simplifies debugging and ensures the assistant doesn't veer off track in the middle of a task.

Lessons From the Field: What Reliable AI Looks Like

Across enterprise deployments, structured automation improved consistency by replacing fragile dialogue trees and intent classifications with command generation. This approach interprets user input as a sequence of actions, allowing assistants to handle nuanced phrasing and edge cases without collapsing the flow. Conversation repair takes this further by allowing assistants to respond smoothly to interruptions, corrections or topic changes. Combined with contextual rephrasing, responses remain fluid while staying tethered to the original business logic.

These improvements don't require massive models or high-latency pipelines. Teams using smaller, fine-tuned models (e.g., Llama 8B, Gemma) have reduced costs and latency without sacrificing quality. Deployments that once depended on proprietary APIs now run effectively with open-source models and inference endpoints, offering performance and control.

Balancing Performance And Cost At Scale

Token-based pricing punishes high usage, even when a model generates near-identical responses. In a production environment, that adds up fast. A high-volume assistant handling repeated help requests doesn't need a high-powered model to phrase "Let me assist you with that" ten thousand different ways. Rasa reduced those costs by deploying smaller, open-source models fine-tuned for specific roles. By offloading tasks like contextual rephrasing to a lightweight LLM, teams preserved conversational quality while cutting unnecessary overhead.
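The interpret-then-execute split described above can be sketched in a few lines. This is a hypothetical illustration, not Rasa's actual API: the LLM's role is stubbed out with keyword rules, and the command names and handlers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Command:
    name: str
    args: dict

def interpret(user_message: str) -> Command:
    """Map a user message to a structured command.

    In a production system an LLM would produce this mapping; it is
    stubbed here with keyword rules so the sketch stays self-contained.
    """
    text = user_message.lower()
    if "transfer" in text:
        return Command("start_flow", {"flow": "transfer_money"})
    if "cancel" in text:
        return Command("cancel_flow", {})
    return Command("clarify", {})

# Deterministic execution: each command maps to fixed business logic,
# so the same command always produces the same behavior.
HANDLERS = {
    "start_flow": lambda args: f"Starting flow: {args['flow']}",
    "cancel_flow": lambda args: "Okay, cancelling that.",
    "clarify": lambda args: "Could you rephrase that?",
}

def execute(cmd: Command) -> str:
    return HANDLERS[cmd.name](cmd.args)

print(execute(interpret("I want to transfer money")))  # Starting flow: transfer_money
```

Because execution is a plain dictionary lookup over fixed handlers, only the interpretation step involves the model; everything downstream is deterministic, which is the property the article argues makes behavior debuggable and repeatable.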
These optimizations led to a 77.8% reduction in operational costs across real-world implementations. In rephrasing tasks, smaller models delivered natural, varied output with response quality that closely matched larger systems. Fine-tuning allowed developers to tune temperature settings, reduce hallucination risk and improve latency.

Voice interfaces, in particular, demand low-latency systems. A delay of even a second breaks the illusion of real-time conversation. In head-to-head testing, fine-tuned small models outperformed GPT-4 in latency-sensitive deployments, delivering faster, more consistent user experiences at a dramatically lower cost.

Conclusion

Enterprise AI must deliver with consistency, accuracy and speed. That happens when systems are designed with clear boundaries between understanding and execution. Structured automation brings order to complexity, giving teams the tools to iterate faster, reduce costs and control what their assistant says and does. This approach doesn't slow progress. It can give teams the foundation to move from proof of concept to production confidently. When reliability is built in from the start, AI becomes easier to maintain, simpler to scale and strong enough to support critical business operations. As with any AI model, it's important to deploy strategically and at a measured pace—and to constantly monitor how it is performing.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Bloomberg
27 minutes ago
- Bloomberg
Trump's Chip Tariff Threat Sparks Pushback From Auto Industry to Tech
Blowback to President Donald Trump's idea of tariffs on imported semiconductors is proving to be broad and deep, stretching from auto companies and boat makers to the technology industry and crypto enthusiasts, according to a review of more than 150 public comments on the proposal. The possible levy of up to 25% has united rivals like Tesla Inc., General Motors Co. and Ford Motor Co. in voicing reservations. It's brought together industry lobbies from the Crypto Council for Innovation to the National Marine Manufacturers Association. Even Taiwan and the People's Republic of China are finding common cause, along with predictable parts of the tech sector including chipmakers and wireless providers.