
iOS 26 will make your old iPhone feel faster with this nifty trick
On iOS 26 beta 6, apps launch faster and with a wrap effect similar to the way windows minimize to the Dock on macOS. Part of the change is that apps leap out of their icons much faster than before, which adds to the feeling of speed. Unlike iOS 18, where the animation is more gradual, on iOS 26 apps expand to 90% of the screen in the time it takes the older software to fill about half the display.
That's just a cosmetic change, but it makes iOS 26 beta 6 feel faster than before, and it'll most likely make older iPhones feel snappier as well. That matters because iOS 26 will come pre-installed on the iPhone 17, but it'll also be available as a software update for devices as old as the iPhone 11, which was released in 2019.

Providing a feeling of faster software is a smart move for Apple. The Liquid Glass redesign is already controversial, so more negative opinions will likely emerge once it reaches a wider audience. If Apple makes people feel like their phones got faster, they'll be happier, even if they're not delighted by the new design. Like many other things Apple does, this isn't exactly unique: most Android launchers already have faster animations, which look especially good on high-refresh-rate displays.
As an owner of an older iPhone, I am quite excited about Apple's approach with iOS 26. It won't be the first time my device has felt faster after a major OS update, and I've always been happy about it. Even if it's just a trick, I'll happily enjoy the perception and may even postpone buying a new phone a little longer.

Related Articles


Phone Arena · an hour ago
Google Photos might get a new way to show if images were made or edited with AI
The discovery was made in version 7.41 of the app, where code references a feature called "threepio." It would add a new "How was this made" section to the photo details view: swiping up on a photo or video would show details about how it was created or edited. The labels it might include are:

- "Media created with AI"
- "Edited with AI tools"
- "Edited with multiple AI tools"
- "Edited with non-AI tools"
- "Media captured with a camera without software adjustments"

It may also detect when multiple editing tools were used or when several images were combined. Additionally, if the file's edit history is missing or has been changed, Google Photos would show an error message instead.

According to the source, this functionality appears to be powered by Content Credentials, a system that attaches a persistent history of edits to a file. That information stays with the media even when shared, unless it's removed. The idea isn't entirely new for Google. The company has already developed SynthID, a DeepMind project that invisibly watermarks AI-generated images. While it's unclear if SynthID is being used here, both approaches aim to give people more context about the origins of visual content.

Examples of realistic images created via Imagen 3, a Google image generation tool. | Image credit — Google

Other companies have been working on similar solutions. Adobe's Content Authenticity Initiative tracks edits in image metadata, while Meta has committed to labeling AI-generated images across Facebook and Instagram. Together, these projects show that the tech industry sees transparency around AI edits as increasingly important. If Google releases this feature in Google Photos, it could be a useful tool for quickly checking whether photos and videos are authentic.
While the irony is not lost on me that this is the same company that developed tools to generate very realistic AI images and videos, I can see how having this could be valuable in areas like journalism, education, and online sales, where trust matters. Additionally, including such a tool in one of the most widely used photo apps could set an example that others follow.


Phone Arena · an hour ago
Solos' new partnership brings real-world use cases to its already excellent smart glasses starting today
Solos — which makes some pretty neat smart glasses to challenge the Meta Ray-Ban — has just partnered with Envision. Through this collaboration, the Ally app is now supported on the Solos AirGo, bringing practical uses to the glasses for people with impaired vision.

Ally on the Solos AirGo will enable users to carry out many tasks with AI-powered assistance backing them up. For example, the glasses can read text from books, mail, menus, and signs for the user. The AirGo will also be able to recognize people and places, so users can simply ask their glasses where they currently are, or who might be approaching them. Furthermore, the Solos AirGo — powered by Ally — can remember context for future conversations, or read a document aloud. And, of course, the glasses can also answer questions, provide reminders, or carry out other tasks that current AI models are quite good at.

As Meta, Google, Samsung, and Apple vie for an emerging industry, it's nice to see a company also keeping other helpful applications of the technology in mind. The Meta Ray-Ban smart glasses were an unprecedented success, and glasses like the Solos AirGo are a bridge between the smartphone of today and the true AR smart glasses of tomorrow.

Using Ally on the Solos AirGo smart glasses. | Image credit — Solos

Envision's Ally app's compatibility with the Solos AirGo smart glasses begins today, and all tiers of the glasses will get a month-long free trial of Ally Pro. If you find the app helpful with your glasses, it may well be your best purchase in years. You can order a pair of the Solos AirGo smart glasses here.

Solos' smart glasses offer plenty of reasons for consumers to choose them over competitors. For example, shortly after bringing AI-powered features to the glasses, Solos introduced an extra mode of privacy.
The Solos AirGo Vision, launched last year, allowed users to swap frames whenever they didn't want the camera to see their surroundings. Whenever the industry finally moves on to smart glasses with displays, I suspect Solos will follow with some pretty nifty offerings of its own.


Phone Arena · 2 hours ago
New Google Messages safety feature rolls out that detects and blurs explicit images
The feature works by processing and classifying images entirely on-device through Android System SafetyCore. According to Google, no identifiable data or the classified content itself is sent to the company's servers, and users must be signed in to their Google Account in Messages for the tool to function. When a blurred image is detected, you can choose to:

- Learn why nude images can be harmful
- Block the sender's number
- View the image after confirming your choice
- Return to the conversation without opening it

The system also issues a prompt when users attempt to send or forward a nude image. Additionally, users will be reminded of the risks and must confirm before the message goes through.

For adults (18+), the feature is turned off by default but can be enabled via Google Messages Settings > Protection & Safety > Manage sensitive content warnings > Warnings in Google Messages. The rules differ for younger users: supervised accounts cannot turn it off without parental controls via Family Link, while unsupervised teens aged 13–17 can disable it in their Google Account settings.

Settings to manage sensitive content warnings. | Image credit — 9to5Google

Apple introduced a similar system called Communication Safety in iMessage, which blurs sexually explicit images for children's accounts and provides safety resources. Like Google's approach, Apple's detection also happens on-device, aiming to protect privacy while adding an extra layer of safety. However, Apple's version is primarily aimed at minors, whereas Google's covers both adult and teen users, with different default settings based on age.

On one hand, Google's Sensitive Content Warnings could help reduce harmful or unwanted exposure, especially for younger users. Having the detection happen on-device, with no image data sent to servers, should also help ease privacy concerns. On the other hand, some users may find the prompts intrusive, particularly in adult conversations where consent is already established.
The fact that adults must enable the feature manually might also limit its adoption. That said, this rollout targets a real problem that needs a solution — particularly when it comes to minors. If tweaking your settings and putting up with some annoying prompts is the price to pay, it's up to each individual to decide whether it's worth it.