I Use These 7 Apple Watch Features Every Day to Improve Performance


CNET · a day ago

If you're athletic, the Apple Watch can help you boost your body's performance, whether that's staying on top of your pace and heart rate during a run or tracking your sleep cycles. But workouts aren't the only things that benefit from an Apple Watch. It also helps me stay on top of my schedule and is clutch in the kitchen when I need multiple timers. It's these types of little details that make everyday tasks just that much easier.
And that's just the beginning: At Apple's Worldwide Developers Conference, the company announced WatchOS 26, which will bring the new Liquid Glass design to the watch, as well as the intriguing AI-based Workout Buddy feature. Here's everything you missed at WWDC 2025.
Before WatchOS 26 arrives this fall, however, here are some of my favorite Apple Watch features in WatchOS 11.
Swipe between watch faces (again)
Until WatchOS 10.0, you could swipe from the left or right edge of the screen to switch active watch faces, a great way to quickly go from an elegant workday face to an exercise-focused one, for example. Apple removed that feature, likely because people were accidentally switching faces by brushing the edges of the screen.
Swipe from the edge to switch between faces.
Screenshot by Jeff Carlson/CNET
However, the regular method involves more steps (touch and hold the face, swipe to change, tap to confirm) and people realized that the occasional surprise watch face change wasn't really so bad. Therefore, as of version 10.2, including the current WatchOS 11.2, you can turn the feature on by toggling a setting: Go to Settings > Clock and turn on Swipe to Switch Watch Face.
Stay on top of your heart health with Vitals
Wearing your Apple Watch while sleeping offers a trove of information -- and not just about how you slept last night. If you don the timepiece overnight, it tracks a number of health metrics. A new feature in WatchOS 11 gathers that data into the Vitals app, which reports the previous night's heart rate, respiration, body temperature (on recent models) and sleep duration. The Vitals app can also show data collected during the previous seven days -- tap the small calendar icon in the top-left corner.
(If you own a watch model sold before Jan. 29, 2024, you'll also see a blood oxygen reading. On newer watches in the US, that feature is disabled due to an intellectual property infringement fight.)
The Vitals app reports heart and health trends collected while you sleep.
Screenshot by Jeff Carlson/CNET
How is this helpful? The software builds a baseline of what's normal for you. When the values stray outside normal ranges, such as irregular heart or respiratory rates, the Vitals app reports them as atypical to alert you. It's not a medical diagnosis, but it can prompt you to get checked out and catch any troubles early.
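Apple doesn't publish how Vitals computes its typical ranges, but the general idea can be sketched in a few lines of Python: treat the past week of overnight readings as a personal baseline and flag anything that strays well outside it (here, more than two standard deviations from the mean). All the names and the threshold are illustrative, not Apple's actual method.

```python
from statistics import mean, stdev

def baseline(values):
    """Compute a simple personal 'normal range': mean ± 2 standard deviations."""
    m, s = mean(values), stdev(values)
    return (m - 2 * s, m + 2 * s)

def is_atypical(value, normal_range):
    """Flag a reading that falls outside the personal baseline."""
    low, high = normal_range
    return not (low <= value <= high)

# A week of overnight resting heart rates (bpm) for one person.
week = [56, 58, 55, 57, 59, 56, 58]
rng = baseline(week)

print(is_atypical(57, rng))  # in range: typical
print(is_atypical(75, rng))  # well above range: flagged as atypical
```

The point of a per-person baseline, rather than a fixed cutoff, is that a heart rate that's normal for one sleeper can be unusual for another.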
Make the Smart Stack work for you
Bring up the Smart Stack using the crown or by swiping.
Screenshot by Jeff Carlson/CNET
The Smart Stack is a place to access quick information that might not fit into what Apple calls a "complication" (the things on the watch face other than the time itself, such as your Activity rings or the current outside temperature). When viewing the clock face, turn the digital crown clockwise or swipe from the bottom of the screen to view a series of tiles that show information such as the weather or suggested photo memories. This turns out to be a great spot for accessing features when you're using a minimal watch face that has no complications.
Choose which Live Activities appear automatically
The Smart Stack is also where Live Activities appear: If you order a food delivery, for example, the status of the order appears as a tile in the Smart Stack (and on the iPhone lock screen). And because it's a timely activity, the Smart Stack becomes the main view instead of the watch face.
Live Activities like mobile orders appear in the Smart Stack.
Screenshot by Jeff Carlson/CNET
Some people find that too intrusive. To disable it, on your watch open the Settings app, go to Smart Stack > Live Activities and turn off the Auto-Launch Live Activities option. You can also turn off Allow Live Activities in the same screen if you don't want them disrupting your watch experience.
Apple's apps that use Live Activities are listed there if you want to configure the setting per app, such as making active timers appear but not media apps such as Music. For third-party apps, open the Watch app on your iPhone, tap Smart Stack and find the settings there.
Add and pin favorite widgets in the Smart Stack
When the Smart Stack first appeared, its usefulness seemed hit or miss. In WatchOS 11, Apple seems to have improved the algorithms that determine which widgets appear -- instead of it being an annoyance, I find it does a good job of showing me information in context. But you can also pin widgets that will show up every time you open the stack.
For example, I use 10-minute timers for a range of things. Instead of opening the Timers app (via the App list or a complication), I added a single 10-minute timer to the Smart Stack. Here's how:
1. View the Smart Stack by turning the Digital Crown or swiping up from the bottom of the screen.
2. Touch and hold the screen to enter the edit mode.
3. Tap the + button and scroll to the app you want to include (Timers, in this example).
4. Tap a tile to add it to the stack; for Timers, there's a Set Timer 10 minutes option.
5. If you want it to appear higher or lower in the stack order, drag it up or down.
6. Tap the checkmark button to accept the change.
Add specific widgets to the Smart Stack.
Screenshot by Jeff Carlson/CNET
The widget appears in the stack, but it may get pushed down in favor of other widgets the watch thinks should have priority. In that case, you can pin it to the top of the list: While editing, tap the yellow Pin button. That moves it up, but Live Activities can still take precedence.
Use the watch as a flashlight
You've probably used the flashlight feature of your phone dozens of times, but did you know the Apple Watch can also be a flashlight? Instead of a dedicated LED (which phones also use as a camera flash), the watch's full screen becomes the light emitter. It's not as bright as the iPhone's, nor can you adjust the beam width, but it's perfectly adequate for moving around in the dark when you don't want to disturb someone sleeping.
To activate the flashlight, press the side button to view Control Center and then tap the Flashlight button. That makes the entire screen white -- turn the Digital Crown to adjust the brightness. It even starts dimmed for a couple of seconds to give you a chance to direct the light away so it doesn't fry your eyes.
Your Apple Watch can double as a hands-free flashlight.
Screenshot by Jeff Carlson/CNET
The flashlight also has two other modes: Swipe left to make the white screen flash on a regular cadence or swipe again to make the screen bright red. The flashing version can be especially helpful when you're walking or running at night to make yourself more visible to vehicles.
Press the Digital Crown to turn off the Flashlight and return to the clock face.
Pause your Exercise rings if you're traveling or ill
Closing your exercise, movement and standing rings can be great motivation for being more active. Sometimes, though, your body has other plans. Until WatchOS 11, if you became ill or needed to take a long-haul trip, any streak of closed rings you had built up would be broken.
Now, the watch is more forgiving (and practical), letting you pause your rings without disrupting the streak. Open the Activity app and tap the Weekly Summary button in the top-left corner. Scroll all the way to the bottom (take a moment to admire your progress) and tap the Pause Rings button. You can choose to pause them for today, until next week or month, or set a custom number of days.
Give yourself a break when needed and pause your exercise rings.
Screenshot by Jeff Carlson/CNET
When you're ready to get back into your activities, go to the same location and tap Resume Rings.
Bypass the countdown to start a workout
Many workouts start with a three-second countdown to prep you to be ready to go. That's fine and all, but usually when I'm doing an Outdoor Walk workout, for example, my feet are already on the move.
Instead of losing those steps, tap the countdown once to bypass it and get right to the calorie burn.
How to force-quit an app (and why you'd want to)
Don't forget, the Apple Watch is a small computer on your wrist, and every computer will have glitches. Every once in a while, for instance, an app may freeze or behave erratically.
On a Mac or iPhone, it's easy to force a recalcitrant app to quit and restart, but it's not as apparent on the Apple Watch. Here's how:
1. Double-press the Digital Crown to bring up the list of recent apps.
2. Scroll to the one you want to quit by turning the crown or dragging with your finger.
3. Swipe left on the app until you see a large red X button.
4. Tap the X button to force-quit the app.
You can force-quit an app on the Apple Watch.
Screenshot by Jeff Carlson/CNET
Keep in mind this is only for times when an app has actually crashed -- as on the iPhone, there's no benefit to manually quitting apps.
These are some of my favorite Apple Watch tips, but of course there's a lot more to the popular smartwatch. Be sure to also check out which new health features are expected in the next models and Lexy Savvides' review of the Series 10.