Apple smart glasses just tipped for 2027 launch — and there are 6 other 'head-mounted' wearable devices on the way

Tom's Guide · 7 days ago
We've heard a bunch of rumors about Apple's plans for smart glasses and other wearable headgear, but those plans may be grander than we initially thought. According to analyst Ming-Chi Kuo, Apple has at least seven different head-mounted devices in development right now.
According to the analyst's report, "Apple views head-mounted devices as the next major trend in consumer electronics," which may explain why it's putting so much work into developing different options.
Apparently, this includes three Vision series products and four different kinds of smart glasses. None of them is expected to arrive in 2026; the first releases are planned for 2027. Kuo claims that five of these products have "confirmed development timelines," while the final two are still TBD.
The first new product set to arrive is apparently a pair of Ray-Ban-like smart glasses, with projected shipments of 3-5 million units in 2027. Before then, however, Kuo believes an M5-powered Vision Pro, otherwise unchanged from the current model, is set to go into mass production in Q3 of this year.
Kuo claims that the Ray-Ban-like glasses will have no display functionality — instead relying on audio playback, cameras, video recording and an AI that can analyze the world around you. Voice and gesture controls are also apparently part of the package.
Kuo's report predicts that a lightweight "Vision Air" will enter mass production in Q3 2027, with a 40% reduction in weight, a lower price tag and "Apple's latest flagship iPhone processor."

Meanwhile, the second-generation Vision Pro is expected to arrive in late 2028 with a lighter design, a lower price and a "Mac-grade processor."
In terms of glasses, late 2028 is also when Kuo predicts a set of XR glasses will arrive, complete with a liquid crystal on silicon (LCoS) color display — plus voice and gesture controls. AI support is said to be critical to its success.
Some kind of tethered display accessory is also said to have been in development, but work on it has been paused since last year, apparently because the device lacked the competitive edge Apple wants against rival products.
It's unclear what the seventh and final device is, and Kuo doesn't go into detail about it.
Between Meta, Google, Samsung and upstarts like Xreal, Apple really has its work cut out for it. The competition is heating up, and right now 2027 seems like a very long way away.

Related Articles

Is your Pixel having lock screen issues after Android 16 update? You're not alone.

Android Authority · 35 minutes ago

TL;DR

• Some Google Pixel owners are reporting lock screen issues after updating to Android 16.
• Problems include tap-to-wake not working, the screen not turning on after hitting the power button, and more.
• Affected users are advised to restart their phones to mitigate the issue, but Google is apparently working on a fix.

Google Pixel phones were the first to receive an update to the stable Android 16 software. This update brings features like an Advanced Protection Mode, anti-scammer measures during phone calls, and more. Unfortunately, it looks like the update has also brought lock screen woes to some users.

Some people on the Pixel Phone Help forum and Reddit (h/t: 9to5Google) have reported various lock screen issues on their Pixels after the Android 16 update. These problems include tap-to-wake not working, the screen not turning on (or being slow to turn on) after hitting the power button, extreme variations in brightness, and fingerprint scanner problems. A few people also report a slow UI and app crashes when they are eventually able to unlock their phone screen.

Affected users have tried several workarounds to mitigate this problem. One prominent workaround is to restart the phone, but this only seems to work for a few hours before the lock screen issues pop up again. A product expert on Google's forum also suggested that users reset their adaptive brightness as a workaround for the brightness issue on the lock screen. This can be done via Settings > Apps > All apps > Device Health Services > Storage and cache > Clear storage > Reset adaptive brightness (a scripted equivalent is sketched below).

Unfortunately, these problems seem to have persisted for almost a month now. Thankfully, the product expert confirmed that Google is aware of these lock screen problems and is working on a fix. There's no word on a release timeline just yet, but we hope it's sooner rather than later.
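For tinkerers, roughly the same reset can be scripted over adb instead of tapped through Settings. The sketch below is illustrative, not an official fix: it assumes adb is installed and USB debugging is enabled, and that Device Health Services still ships under its long-standing package name (com.google.android.apps.turbo), which you should verify on your own device. Note that pm clear wipes all of the app's locally stored data, a blunter action than the in-Settings reset.

import subprocess

# Assumed package name for Device Health Services; confirm on your device with:
#   adb shell pm list packages | grep turbo
DHS_PACKAGE = "com.google.android.apps.turbo"

def reset_adaptive_brightness() -> None:
    # `pm clear` wipes the app's stored data, which includes the learned
    # adaptive brightness history, so the model retrains from scratch.
    subprocess.run(["adb", "shell", "pm", "clear", DHS_PACKAGE], check=True)

if __name__ == "__main__":
    reset_adaptive_brightness()
    print("Cleared Device Health Services data; adaptive brightness will retrain.")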

AI Shopping Is Here. Will Retailers Get Left Behind?

Business of Fashion · 2 hours ago

AI doesn't care about your beautiful website. Visit any fashion brand's homepage and you'll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers' eyes and ease navigation. To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They're often written in the programming language JavaScript, which, for the moment at least, most AI struggles to read.

This giant blind spot didn't matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop — or even make the purchase themselves.

'A lot of your site might actually be invisible to an LLM from the jump,' said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi's make sure their products are visible to and shoppable by AI.

The vast majority of visitors to brands' websites are still human, but that's changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.

Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer's site, adding an item to cart and completing the checkout process on behalf of a shopper. Google's recently introduced agent will automatically buy a product when it drops to a price the user sets.

This form of shopping is very much in its infancy; the AI shopping agents available today still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that's consumers discovering products or agents completing transactions. To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI, not just humans.

'It's in every single conversation I'm having right now,' said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. 'It is what everyone wants to talk about, and everyone's trying to figure out and ask [about] and understand and build for.'

From SEO to GEO and AEO

As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user's request is the goal.

That's easier said than done, particularly since it's not always clear even to the AI companies themselves how their tools rank products, as Perplexity's chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results. There are indications of what attracts their notice, though: products with rich, well-structured content attached tend to have an advantage, as do those that are the frequent subject of conversation and reviews online.

'Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs,' said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.

Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google's Merchant Center, where retailers upload product data to ensure their items appear in Google's search and shopping results. These types of feeds are full of structured data, including product names and descriptions, meant to be picked up by machines so they can direct shoppers to the right items. One example from Google reads: Stride & Conquer: Original Google Men's Blue & Orange Power Shoes (Size 8). Ghergich said AI will often read this data before other sources such as the HTML on a brand's website. These feeds can also be vital for making sure the AI is pulling pricing data that's up to date, or as close to it as possible.
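To make "structured data" concrete, the sketch below generates a schema.org Product description in JSON-LD, one widely used machine-readable format for product pages; Merchant Center feeds use their own specification. The product name echoes Google's example quoted above, and every other value is an invented placeholder.

import json

# Hypothetical product record modeled on the Google example quoted above.
# All values besides the name are invented placeholders for illustration.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stride & Conquer: Original Google Men's Blue & Orange Power Shoes (Size 8)",
    "description": "Lightweight everyday shoe in blue and orange.",
    "brand": {"@type": "Brand", "name": "Google"},
    "offers": {
        "@type": "Offer",
        "price": "89.99",  # keeping this current is exactly the freshness problem noted above
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# The output would be embedded in the product page inside a
# <script type="application/ld+json"> tag for crawlers and AI to read.
print(json.dumps(product, indent=2))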
As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google's advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective at catching AI's notice, which would force a redistribution of how retailers spend their ad budgets. Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

'The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role,' said Canaves.

Bots Are Good Now

Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, shoppers just tell the agent to do it and off it goes.

But most websites aren't designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they're a nuisance. 'A lot of time and effort has been spent to keep machines out,' said Rubail Birwadker, senior vice president and global head of growth at Visa.

If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn't make the sale, and the customer is left with a frustrating experience. (A toy version of that timing check is sketched below.)
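Here is a minimal sketch of the "completes forms too fast" signal mentioned above. Real anti-bot systems weigh many more signals (mouse movement, device fingerprints, IP reputation), and the threshold below is invented for the example.

import time

# Invented threshold: flag any checkout form finished faster than a human
# plausibly could. Production systems tune this against real traffic.
MIN_HUMAN_SECONDS = 3.0

class CheckoutForm:
    def __init__(self) -> None:
        self.opened_at = time.monotonic()  # stamp when the form is rendered

    def looks_automated(self) -> bool:
        # Sessions that finish the form almost instantly are treated as bots.
        elapsed = time.monotonic() - self.opened_at
        return elapsed < MIN_HUMAN_SECONDS

form = CheckoutForm()
# ... form fields get filled in here; a bot finishes almost instantly ...
if form.looks_automated():
    print("Challenge or block the transaction.")
else:
    print("Proceed to payment.")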
Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.

'We are going out and working with these providers to say, 'Hey, we would like to … make it easy for you to know what's a good, white-list bot versus a non-whitelist bot,'' Birwadker said.

Of course, the bot has to make it to checkout first. AI agents can stumble over other common elements in webpages, like login fields, and it may be some time before all those issues are resolved and agents can seamlessly complete any purchase.

Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.

Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it's spilling over into how they find and buy products. More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.

'It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms,' said Birwadker.

I tested the AI transcription tools for iPhone vs Samsung Galaxy vs Google Pixel — here's the winner

Tom's Guide · 2 hours ago

This article is part of our AI Phone Face-Off. If you're interested in our other comparisons, check out the links below.

Long before AI was a buzzword included in every handset's marketing material, a few lucky phones already offered automatic transcripts of voice recordings. But the arrival of on-device AI has extended that feature to more phones and more apps, including the Phone app itself, while also adding auto-generated summary features to the mix.

All three of the major smartphone makers — Apple, Google and Samsung — offer some type of voice recording app on their flagship phones with real-time transcription as part of the feature set. Those phones now record and transcribe phone calls, too. And summary tools that tap into AI to produce recaps of conversations, articles, recordings and more have become commonly available on iPhones, Pixels and Galaxy S devices alike.

But which phone offers the most complete set of transcription and summarization tools? To find out, I took an iPhone 15 Pro, a Pixel 9 and a Galaxy S25 Plus, each loaded with the latest available version of its operating system, and put each device through a series of tests. If you need a phone that can turn your speech into text or cut through a lengthy recording to bring you the highlights, here's which phone is most up to the job.

I wrote out a scripted phone call, handed one copy to my wife and then scurried outside to call her three separate times from the iPhone, Pixel and Galaxy S device. By scripting out our conversation, we could see which on-board AI provided a more accurate transcript. And after each call, I took a look at the AI-generated summary to see if it accurately followed our discussion of rental properties in the San Francisco Bay Area.

The iPhone's transcript was the most muddled of the three, with more instances of incorrect words and a lack of proper punctuation. The biggest misstep, though, was mixed-up words that my wife and I had said, as if we had been talking over each other. (We had not.) Because I was calling someone in my Contacts, though, the iPhone did helpfully add names to each speaker — a nice touch.

The transcripts from the Pixel 9 and Galaxy S25 Plus were equally accurate. Samsung displays its transcripts as if you're looking at a chat, with different text bubbles representing each speaker. Google's approach is to label the conversation with 'you' and 'the speaker.' I prefer the look of Google's transcript, though I appreciate that when my wife and I talked expenses, Galaxy AI successfully put that in dollar amounts. Google's Gemini just used numbers without dollar designations.

As for the summaries, the one provided by the iPhone accurately summed up the information I requested from my wife. The Galaxy AI summary was accurate, too, but left out the budget amount, which was one of the key points of our discussion. Google's summary hit the key points — the budget, the dates and who was going on the trip — and also put the summary in second person ('You called to ask about a rental property…'). I found that to be a personal touch that put Google's summary over the top.

I will point out that the iPhone and Galaxy S25 Plus summaries appeared nearly instantly after the call. It took a bit for the Pixel 9 to generate its summary — not a deal-breaker, but something to be aware of.

Winner: Google — The Pixel 9 gave me one of the more accurate transcripts in a pleasing format, and it personalized a summary while highlighting the key points of the conversation.
I launched the built-in recording apps on each phone at the same time so that they could simultaneously record me reading the Gettysburg Address. By using a single recording, I figured I could better judge which phone had the more accurate transcript before testing the AI-generated summary.

The transcript from Samsung's Voice Recorder app suffered from some haphazard capitalization and oddly inserted commas that would require a lot of clean-up time if you need to share the transcript. Google Recorder had the same issue and, based on the transcript, seemed to think that two people were talking. The iPhone's Voice Memos app had the cleanest transcript of the three, though it did have a handful of incorrectly transcribed words.

All three recording apps had issues with me saying 'nobly advanced,' with the Galaxy S25 Plus thinking I had said 'nobleek, advanced' and the iPhone printing that passage as 'no league advanced.' Still, the iPhone transcript had the fewest instances of misheard words.

As for summaries, the Galaxy AI-generated version was fairly terse, with just three bullet points. Both the Pixel and the iPhone recognized my speech as the Gettysburg Address and delivered accurate summaries of the key points. While getting a summary from the iPhone takes some doing — you have to share your recording with the iOS Notes app and use the summary tool there — I preferred how concise its version was compared to what the Gemini AI produced for the Pixel.

Winner: Apple — Not only did the iPhone have the best-looking transcript of the three phones, its summary was also accurate and concise. That said, the Pixel was a close second with its summarization feature, and it would have won this category had it not heard those phantom speakers when transcribing the audio.

Why keep testing the transcription feature when we've already put the recording apps through their paces? Because there could come a time when you need to record a meeting where multiple people are talking, and you'll want a transcript that recognizes that. You may be in for a disappointing experience if the transcripts of me and my wife recreating the Black Knight scene from 'Monty Python and the Holy Grail' are anything to go by.

Both the Galaxy and Pixel phones had problems recognizing who was speaking, with one speaker's words bleeding into the next. The Pixel 9 had more than its share of problems here, sometimes attributing an entire line to the wrong speaker. The Galaxy had more incorrectly transcribed words, with phrases like 'worthy adversary' and 'I've had worse' becoming 'where the adversary is' and '5 had worse,' respectively. The Pixel had a few shockers of its own, but its biggest issue remained the overlapping dialogue.

At least those phones recognized that two people were talking. Apple Intelligence's transcript ran everything together, so if you're working off that recording, you've got a lot of editing in your future.

With this test, I was less interested in the summarization features, though the Pixel did provide the most accurate one, recognizing that the dialogue was 'reminiscent' of 'Monty Python and the Holy Grail.' The Galaxy AI-generated summary correctly deduced that the Black Knight is a stubborn person who ignores his injuries, but wrongly concluded that both speakers had agreed the fight was a draw. The iPhone issued a warning that the summarization tool wasn't designed for an exchange like this and then went on to prove it with a discombobulated summary in which the Black Knight apparently fought himself.
Winner: Samsung — Galaxy AI had easier-to-correct errors, with speakers' lines bleeding into each other. The Gemini transcript was more of a mess, but the summary nearly salvaged this test for Google.

Of all the promised benefits of AI on phones, few excite me more than the prospect of a tool that can read through email chains and surface the relevant details so that I don't have to pick through each individual message. And much to my delight, two of the three phones I've tested stand out in this area.

I'm sad to say it isn't the Galaxy S25 Plus. I found the feature a bit clunky to access, as I had to use the built-in Internet app to go to the web version of Gmail to summarize an exchange between me and two friends where we settled on when and where to meet for lunch. Galaxy AI's subsequent summary included the participants and what we were talking about, but it failed to mention the date and location we agreed upon.

Both the Pixel and the iPhone fared much better. Gemini AI correctly listed the date, time and location of where we were going to meet for lunch. It even spotted a follow-up email I had sent en route warning the others that I was running late. Apple Intelligence also got this feature right in the iPhone's built-in Mail app.

I think the Pixel has the better implementation, as getting a summary simply requires you to tap the Gemini button for all the key points to appear in a window. iOS Mail's summary feature lives at the top of the email conversation, so you've got to scroll all the way up to access your summary.

Winner: Google — The Pixel and the iPhone summarized the message chain equally well, but Google's implementation is a lot easier to access.

In theory, a summary tool for web pages would help you get the key points of an article quickly. The concern, though, is that the summary proves to be superficial or, even worse, not thorough enough to recognize all the key points. So how do you know how accurate the summary is? To find out, I figured I'd run one of my own articles through the summary features of each phone — this article about the push to move iPhone manufacturing to the U.S., specifically. I mean, I know what I wrote, so I should be in a good position to judge whether the respective summary features truly got the gist of it.

Galaxy AI did, sort of, with its summary consisting of two broadly correct points: that the Trump administration wants to move phone manufacturing to the U.S., and that high labor costs and global supply chain automation are the big roadblocks. That's not horribly inaccurate, but it is incomplete, as the article talked more about the lack of dedicated assembly plants and equipment in the U.S.

The iPhone's summary — appearing as a tappable option in the menu bar of Safari — was a little bit more detailed on the key roadblock, while also noting the potential for rising prices of U.S.-built phones. However, the summary provided via Gemini AI was far and away the most substantive. It specifically calls out a push for reshoring, notes what Apple already produces in the U.S., and highlights multiple bullet points on the difficulties of U.S. phone manufacturing.

Winner: Google — Summaries don't always benefit from being brief, and the Gemini AI-generated summation of my article hits key points without sacrificing critical details and explanations. You can read that summary and skip my article — please don't, it would hurt my feelings — and still get a good grip on what I had written.
Sometimes notes can be so hastily jotted down that you might have a hard time making sense of them. An ideal AI summary tool would be able to sort through those thoughts and produce a good overview of the ideas you were hoping to capture. If you remember from our AI Writing Tools test, I had some notes on the new features in iOS 26 that I used to try out the auto-formatting features provided by each phone's on-device AI. This time around, I tried out the summary features and found them to be generally OK, with one real standout.

Both Galaxy AI and Apple Intelligence turned out decent summaries. When I selected the Key Points option in Writing Tools for iOS Notes, the iPhone featured a good general summation of changes in iOS 26, with particular attention paid to the Safari and FaceTime enhancements. Other descriptions in the Apple Intelligence-produced summary were a bit too general for my tastes.

I did like the concise descriptions in the Galaxy AI summary, where my lengthy notes were boiled down to two bullet points summing up the biggest additions. It's not the most detailed explanation, but it would work as an at-a-glance synopsis before you dive into the meat of the notes themselves.

Gemini AI on board the Pixel 9 struck the best overall mix between brevity and detail. Google's AI took the bullet points of my original notes and turned them into brief descriptions of each feature — a helpful overview that gets to the heart of what I'd be looking for in a summary.

Winner: Google — While Galaxy AI scores points for getting right to the point in its summary, the more useful recap comes from Gemini AI's more detailed write-up.

If we had restricted these tests to transcripts, it might have been a closer fight, as both Apple and Samsung held their own against Google in converting recordings to text. But throw summaries into the mix, and Google is the clear winner, taking the top spot in four of our six tests. Even in the tests where the Pixel was bested by either the iPhone or the Galaxy S25 Plus, it didn't lag that far behind.

Some of this comes down to what you prefer in a summarization tool. If it's concise summaries, you may be more favorably inclined toward Galaxy AI than I was. Apple Intelligence also shows some promise and would benefit from fine-tuning to make its tools easier to access. But for the best experience right now, Google is clearly the best at transcription and summarization.
