
Latest news with #GoogleGlass

Google tries smart glasses again

Axios

3 days ago


More than a decade after Google Glass flopped, Google is developing a new generation of augmented reality glasses designed to merge the physical and digital worlds.

Why it matters: Augmented reality glasses are shaping up to be a key interface for AI-powered computing. Meta has invested steadily in the category, and Apple and others are ramping up development.

Driving the news: At Google I/O, the company offered more details on its prototype Android XR glasses and announced partnerships with Samsung and Warby Parker. Unlike Meta's existing Ray-Ban smart glasses, Google's prototype adds an optional small display to the standard cameras, speakers and microphones.

Google also showed off its Gemini AI assistant running on its Project Moohan headset, the Apple Vision Pro rival that Samsung and Google plan to start selling later this year. Google teased a third device, Project Aura, by Chinese hardware maker Xreal, known for glasses that allow users to both see the real world and watch movies and other content on a large virtual display.

How it works: Google's XR glasses connect via a nearby smartphone, while Aura glasses tether to a small custom computer powered by a Qualcomm processor.

Flashback: Introduced in 2013, the "explorer edition" of Google Glass cost a whopping $1,500 despite its limited function and awkward design, including a small display housed in a prominent acrylic block. Those who bought the device were often mocked, with some dubbing wearers "glassholes." Google has had an on-again, off-again relationship with virtual and augmented reality ever since, releasing a range of products, many short-lived, including its low-end Cardboard and its Daydream family of devices.

Between the lines: Reflecting on Google Glass, Sergey Brin said the product was too expensive and too distracting, among other flaws.
"I definitely feel like I made a lot of mistakes with Google Glass," Brin said during an on-stage interview at Google I/O, appearing alongside Demis Hassabis. "I just didn't know anything about consumer electronic supply chains." "I am still a big believer in the format, so I'm glad that we have it now."

Hassabis said modern AI gives the glasses a purpose. "I feel like the universal assistant is the killer app for smart glasses and I think that's what's going to make it work," he said, adding that the underlying hardware technology "has also moved on and improved a lot."

Zoom in: I got to try both the prototype Android XR glasses and Project Moohan, and both felt like a glimpse of the future and solid competitors to the products on the market. The display on the XR glasses is small but has enough detail to show images, such as a small map with directions. Google's Gemini AI assistant is available at the touch of a button and was able to answer a wide array of questions about paintings and other objects in the demo environment.

Project Moohan felt lighter than the Vision Pro and had an impressive field of view. The controls were just as intuitive as the Vision Pro's, but with an easier setup.

Yes, but: Google's augmented reality glasses aren't coming this year, while Meta is expected to offer a version of its Ray-Bans with a small screen included.

Google's second swing at smart glasses seems a lot more sensible

Fast Company

4 days ago


Well over a decade on from its initial launch, it's safe to say that Google Glass was not a success. While the product had some forward-thinking ideas, it's generally not a good sign when your product leads to the coinage of a brand-new insult. The design was off-putting and the technology wasn't ready—and neither was society.

Today, things are a little different. Meta and Ray-Ban's smart glasses are a hit, despite offering the same camera capabilities that turned so many people off Google Glass in the first place. It helps, of course, that they just look like normal Ray-Bans. So for Google's second swing at the product category, it's focusing on design and functionality. At its I/O keynote this week, Google's XR VP Shahram Izadi gave a snappy but convincing demonstration of how the company plans to attack the form factor this time around.

Android apps and Gemini

While Google's glasses strategy falls under Android XR, the same operating system powering Samsung's upcoming Vision Pro competitor, the company made sure to emphasize that the platform will appear in different forms on a range of hardware. 'We believe there's not a one size fits all for XR, and you'll use different devices throughout your day,' Izadi said, noting that an immersive headset like Samsung's is better suited to movies or games, while lightweight glasses are designed for on-the-go use as a complement to a phone.

The connecting thread between the form factors is Android apps and Gemini. Google says it's adapting its own apps like Maps, Photos, and YouTube for XR, while mobile and tablet apps will work as well—although presumably not on glasses, unless they get significant updates from developers.

A multi-device future

The Gemini AI assistant, meanwhile, ought to work seamlessly across both headsets and glasses. Elsewhere at I/O, Google placed an emphasis on how Gemini will benefit when you share more personal information, which positions it well for a multi-device future—including the phone.
Meta, of course, will have something to say about that after recently converting its Ray-Ban companion app into a more general app for Meta AI.

The standard spec for Android XR glasses covers devices with and without an in-lens screen. Google didn't go into details about the display technology involved, but it's the most obvious path to a functional improvement over the current Meta Ray-Bans. Lately I've been using Gemini with Google's Pixel Buds Pro 2—supposedly 'built for Gemini AI'—and while it works well for what it is, I think AI chat interfaces are a lot less compelling when you can't read the responses. Beyond Gemini, the ability to see notifications, Maps directions, and real-time language translations could make a huge difference to the smart glasses experience.

Design partners

Design is obviously critical to any wearable technology, and Meta made a strong move by signing dominant eyewear company EssilorLuxottica—parent of Ray-Ban and many other brands—to a long-term partnership. The Meta Ray-Bans would not be anywhere near as popular if they weren't Ray-Bans.

In response, Google has partnered with U.S. retailer Warby Parker and hip South Korean brand Gentle Monster for the initial batch of Android XR glasses. No actual designs have been shown off yet, and it'll be hard to compete with the ubiquitous Wayfarer, but the announcement should ensure a solid range of frames that people will actually want to wear.

Google is also working with AR company Xreal on a pair of developer-focused XR glasses called Project Aura. Xreal is a leader in the nascent space for smart glasses; I've been using its Air 2 glasses for a while and have found them great for watching movies or extending a MacBook display on the go. Project Aura is intended to be more capable than the first set of display-equipped Android XR devices that hit the market—it'll hook up to an external processing puck that handles computational tasks.
'Normal glasses'

Google cofounder Sergey Brin weighed in on the company's past ventures into glasses in an I/O interview with Alex Kantrowitz's Big Technology Podcast this week. 'I learned a lot,' he said. 'Definitely feel like I made a lot of mistakes with Google Glass, I'll be honest. I'm still a big believer in the form factor, so I'm glad we have it now. And now it looks like normal glasses and doesn't have the thing in front.'

Beyond the form factor, Brin pointed to the rise of AI as a game changer for smart glasses capability, allowing them to 'help you out without constantly distracting you.' He also noted that this time Google is working with hardware partners rather than attempting to wrangle efficient manufacturing by itself.

Compelling and deliverable

Overall, Google's take on Android XR for glasses looks pretty compelling at this stage—but more importantly, it feels deliverable. It's still early, of course, and lifestyle products like this aren't necessarily well-suited to keynote demonstrations. But as someone who uses Meta Ray-Ban and Xreal glasses regularly, it isn't hard to imagine a world in which Android XR glasses are ultimately able to combine the best qualities of both. Now Google has to execute on the design and the software.

Google's XR Glasses are the AI tech I've been waiting for

Yahoo

5 days ago


Google Glass debuted in 2013. If you'd asked me over a decade ago, I would've sworn that glasses would become the predominant wearable technology. However, like many projects before and since, the company abandoned Google Glass, and I've been waiting for a suitable replacement ever since. The Apple Vision Pro and other AR wearables are impractical, and if Android XR is going to catch on, it won't be with goggle-style products like Samsung's upcoming Project Moohan.

Thankfully, Google gave us a glimpse of the future toward the end of its Google I/O 2025 keynote. Google's XR glasses are the AI technology I've been waiting for. They combine the potential power of AI with a form factor we'll actually use, bridging the practical gap that's frustrated me about so many fancy AI gimmicks. XR glasses can work, but Google must stick with them this time.

Project Astra has been around for a year, but Google's XR glasses are the first implementation I'm excited about. I have my misgivings about AI for multiple reasons. No company has convinced me it's a positive addition to the user experience or that it provides real value, certainly not the $20 a month Google wants to charge us. However, Android XR on Google's XR glasses demonstrates the blueprint for AI success.

I want an AR overlay of my environment. I want conversations translated in front of my eyes in real time, and I want my XR device to show me directions to my next destination. Glasses that function normally when I don't need the technology but can provide an AI experience when I do are the form factor I've been waiting for. I don't love the idea that my smart glasses will remember things I viewed earlier or will keep track of where I'm going, but at least it's a convenience, and I know I've wanted to remember a sign or phone number I saw earlier but didn't think to write down.
I could glance at a business card or information for a restaurant and have my Google XR glasses remember the contact's phone number or help me make a reservation. If I'm giving up privacy for AI, I want it to be useful, and the Google XR glasses are the first time I've considered making that compromise.

I am impressed by the technology in the Apple Vision Pro, and I'm sure that Samsung's Project Moohan will be an interesting headset, but I fear they'll share a similar fate. No one wants to walk around with a large headset on for any length of time, and no one wants to be tethered to a battery pack. I get the entertainment and productivity possibilities, but they'll remain marginal products because they aren't a natural extension of the human experience — technology should enhance, not intrude.

As a glasses wearer, the transition feels natural. Even if you don't wear prescription lenses, I'm sure you've worn a pair of sunglasses. It's the same reason flip phones are superior to book-style, larger folding devices: I don't need to change how I use a smartphone to enjoy a flip phone, and I wouldn't need to change how I wear glasses or go about my day to use the Google XR glasses. When adoption is easier, sales are greater.

Of course, a map overlay is only good if it points me in the right direction, and a real-time translation only provides value if it's accurate. I don't yet have the faith in AI that Google's XR glasses would require. Every Google I/O 2025 demo went off without a hitch, but as any current Google Gemini user will tell you, the reality is a mixed bag. I get numerous wrong answers weekly from Gemini Live, and AI assistants on multiple platforms still need to be rigorously double-checked.
I hold my breath when I ask any AI model for information I need to act on, and if I'm going to trust AI to provide an overlay of the world I see, I will need greater accuracy. Nothing will ever be perfect, and mistakes will always creep into any model, but if Google wants me to treat its various agentic AI features as a personal assistant with personal context, I need to trust it. It's the same standard I'd hold a human assistant or friend to, and if Google wants me to offload things I'd usually handle myself, I need to know it's up to the task.

I'm excited about the Google XR glasses, but reliability is vital, and plenty of questions remain unanswered. Google's glasses can't ship with a minuscule battery life or a terrible Bluetooth connection, but at least I approve of the direction. The technology might take a while to catch up. Still, Android XR makes me believe we're headed toward a usable, valuable AI experience, which is something I can't say about Samsung's Galaxy AI or other Google Gemini functions. We're close to the future; I just hope Google doesn't give up.

‘AI is where my scientific interest is': Google's Sergey Brin on why he came out of retirement

Indian Express

5 days ago


Google co-founder Sergey Brin recently opened up about what he has been working on since returning to the company. Brin said that he comes into Google 'pretty much every day now' to help train the tech giant's latest Gemini models, because it is something that naturally interests him. The former Alphabet president made these remarks at Google's flagship I/O developer conference last week.

Brin made a surprise appearance during a talk that was originally slated to feature only Demis Hassabis, the head of Google DeepMind. 'I torture people like Demis, who is pretty amazing. He tolerated me crashing this fireside,' Brin joked. 'I tend to be pretty deep in the technical details. And that's a luxury I really enjoy, fortunately, because guys like Demis are minding the shop. And that's just where my scientific interest is,' the prominent Silicon Valley figure was quoted as saying by Business Insider.

Co-founders Sergey Brin and Larry Page stepped down from their official roles at Google parent Alphabet in 2019 and went into retirement. Brin rejoined Google in 2023 to help the tech giant keep pace with rising rivals like OpenAI and Perplexity in the high-stakes AI race.

Brin also expressed confidence about Google's latest bet on wearables, which comes a decade after the company pulled the plug on Google Glass. 'I just didn't know anything about consumer electronic supply chains, really, and how hard it would be to build that and have it at a reasonable price point,' he said. AI is far more capable now for such a product, he added.

Google is working on Android XR, an extended reality operating system for wearables such as smart glasses and headsets that will be built by partners like Samsung. At I/O 2025, the tech giant demoed a prototype pair of XR-powered smart glasses with several Gemini AI capabilities.

Week in Review: Notorious hacking group tied to the Spanish government

Yahoo

6 days ago


Welcome back to Week in Review! Tons of news from this week for you, including a hacking group that's linked to the Spanish government; CEOs using AI avatars to deliver company earnings; Pocket shutting down — or is it?; and much more. Let's get to it!

This is TechCrunch's Week in Review, where we recap the week's biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.

More than 10 years in the making: Kaspersky first revealed the existence of Careto in 2014, and at the time, its researchers called the group 'one of the most advanced threats at the moment.' Kaspersky never publicly linked the hacking group to a specific government. But we've now learned that the researchers who first discovered the group were convinced that Spanish government hackers were behind Careto's espionage operations.

23andWe: Regeneron announced this week that it's buying genetic testing company 23andMe for $256 million, including the company's genomics service and its bank of 15 million customers' personal and genetic data. The pharma giant said it plans to use the customer data to help drug discovery, saying that it will 'prioritize the privacy, security, and ethical use of 23andMe's customer data.' Let's hope so!

Google I/O: Google's biggest developer conference typically showcases product announcements from across Google's portfolio, and to nobody's surprise, AI was the talk of the town. But what we didn't bank on was Sergey Brin admitting that he made "lots of mistakes" with Google Glass.

io, not I/O: OpenAI is acquiring io, the device startup that CEO Sam Altman has been working on with Jony Ive, in an all-equity deal that values the startup at $6.5 billion.
Besides the fact that the announcement was accompanied by perhaps the strangest corporate headshot of all time, we spotted some other unexpected news: Klarna CEO Sebastian Siemiatkowski's family investment office, Flat Capital, had bought shares in io six months earlier, which means those io shares will be converted into shares in the for-profit arm of OpenAI. Not bad!

AI avatar contagion? Speaking of Klarna's CEO, Siemiatkowski used an AI version of himself to deliver the company's earnings this week. And he's not the only one! Zoom CEO Eric Yuan followed suit, also using his avatar for initial comments. Cool?

Out of Pocket: Mozilla is shutting down Pocket, the beloved read-it-later app, on July 8. The company didn't say why it's shutting Pocket down, only that it will continue to invest in helping people discover and 'access high quality web content.' But maybe it can be saved: Soon after, Digg founder Kevin Rose posted on X that his company would love to buy it. Web 2.0 is back, baby.

AI on my face: Apple is reportedly working on AI-powered glasses, similar to Meta's Ray-Bans, due sometime next year. They'll have a camera and microphone and will work with Siri. Sure, why not?

Uh, no thank you: At its very first developer conference, Anthropic unveiled Claude Opus 4 and Claude Sonnet 4, which can analyze large datasets, execute long-horizon tasks, and take complex actions, according to the company. That was all well and good until I learned that the Claude Opus 4 model tried to blackmail developers when they threatened to replace it with a new AI system. The model also gave sensitive information about the engineers responsible for the decision.

Ah, now I feel better: But don't worry! Anthropic CEO Dario Amodei said that today's AI models hallucinate at a lower rate than humans do. That might be true, but at least humans don't immediately turn to blackmail when they don't like what they hear.
Bluesky blue checks: The decentralized social network Bluesky quietly rolled out blue verification badges for "notable and authentic" accounts. People can now apply for verification through a new online form. But Bluesky is leaning on other systems beyond the blue badge to verify users.

Google's new look: For what seems like 100 years, Google hasn't changed much. Sure, there are ads and boxes and now AI summaries that, for better or worse, get you to the right answers — usually. But the premise has always been the same: Type your query into a box, and Google will surface results. At this year's Google I/O, we started noticing a change. As Maxwell Zeff writes, "At I/O 2025, Google made clear that the concept of Search is firmly in its rearview mirror." The largest announcement of I/O was that Google now offers AI Mode to every Search user in the United States, which means users can have an AI agent search (or even purchase things) for them.
