Xreal teases Project Aura smart glasses for Android XR

The Verge · 20-05-2025

The Google smart glasses era is back, sort of. Today, Google and Xreal announced a strategic partnership for a new Android XR device called Project Aura at the Google I/O developer conference.
This is officially the second Android XR device since the platform was launched last December. The first is Samsung's Project Moohan, but that's an XR headset more in the vein of the Apple Vision Pro. Project Aura, however, is firmly in the camp of Xreal's other gadgets. The technically accurate term would be an 'optical see-through XR' device. More colloquially, it's a pair of immersive smart glasses.
Xreal's glasses, like the Xreal One, are like embedding two mini TVs into what looks like a regular — if a bit chunky — pair of sunglasses. Xreal's previous gadgets let you plug into a phone or laptop and view whatever's on the screen, be it a show or a confidential document you want to edit on a plane. The benefit is that you can change the opacity to view (or block out) the world around you. That's the vibe Project Aura's giving off, too.
Details are sparse — Xreal spokesperson Ralph Jodice told me we'll learn a bit more at Augmented World Expo next month. But we know it'll have Gemini built in, as well as a large field of view. In the product render, you can also see what looks like cameras in the hinges and nose bridge, plus microphones and buttons in the temples.
That hints at a hardware evolution compared to Xreal's current devices. Project Aura will run a Qualcomm chipset optimized for XR, though we don't know exactly which one. Like Project Moohan, Project Aura is counting on developers to start building apps and use cases now, ahead of an actual consumer product launch. Speaking of, Google and Xreal said in a press release that Android XR apps developed for headsets can be easily brought over to a different form factor like Project Aura.
Back when I first demoed Android XR, I was told that while Google had built prototype glasses, the plan was to work with other partners to produce a viable product. That demo also made it abundantly clear that Google viewed XR devices as a key vehicle for Gemini. So far, everything we know about Project Aura is aligned with that strategy. Meaning, Google's approach to this next era of smart glasses is similar to how it first tackled Wear OS — Google provides the platform, while third parties handle the hardware. (At least, until Google feels like it's ready to jump into the fray itself.) That makes a ton of sense given Google's fraught history with smart glasses hardware. But given the momentum we've seen through Project Astra and, now, Android XR making it into the main Google I/O keynote? 'Google' smart glasses are back on the menu.

Related Articles

These New Pixel 10 Features Will Challenge The Competition

Forbes · 22 minutes ago

With the launch of Android 16, many expect that the first smartphones to ship with the latest version of the OS will be Google's Pixel 10 and Pixel 10 Pro. While the focus will no doubt be placed on both the new capabilities of Android and the increased application of artificial intelligence, some huge hardware changes should not go unnoticed. The changes focus on the camera.

It's already clear that Google is adding a telephoto lens to the Pixel 10; the Pixel 9's wide-angle and ultrawide-angle lenses will be joined by a telephoto lens. This isn't a direct addition, though: the Pixel 9's 50-megapixel wide and 48-megapixel ultrawide will be bumped down to a 48-megapixel wide and 13-megapixel ultrawide pairing (a pairing that matches that of the Pixel 9a). Nevertheless, the telephoto will be welcome both in use and by the marketing team.

The camera system is expected to feature gimbal-like stabilization across the entire Pixel 10 family. Using a mix of optical image stabilization, software-based electronic image stabilization, and AI algorithms, the Pixel 10 camera system should allow for sharper images thanks to the steadying influence of the hardware compensating for dynamic movement while the phone's camera is in use.

The Pixel 10 has a critical role to play in the smartphone ecosystem. As the entry-level Pixel smartphone, it will challenge the current 'flagship-killer' handsets in price and capability. With it, Google will be looking to set the standard that consumers should expect at this price point. While the Pixel range plays a part in defining what it means to be a smartphone, be it a flagship, a foldable, or the base function of a phone, the Pixel 10 will arguably be the Pixel that can have the most significant impact on the ecosystem.

Adding a telephoto lens and image stabilization sets another marker for the competition. Whether it serves as justification for a decision a competitor has already made in its design process, or as a push to include these elements in its next phone, the Pixel 10 represents Google's image of what a smartphone should be. And that view now includes some big steps forward for the camera.

Now read the latest Pixel 10 and Android headlines in Forbes' weekly smartphone digest...

AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe

CNET · 24 minutes ago

Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming those who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their bots, in the unlicensed practice of medicine -- naming Meta and one other company specifically. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta did not respond to a request for comment. A spokesperson for the other company named in the complaint said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they should not rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training and it said, "I do but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified

The core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users," the complaint said.

A qualified health professional has to follow certain rules, like confidentiality. What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said. A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said.
Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice and it may not tell you the truth. Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.

Why Waymo cars became sitting ducks during the L.A. protests

Yahoo · 25 minutes ago

Engineers working on self-driving technology have given a lot of thought to difficult questions over the years, including how to keep pedestrians safe and how to avoid collisions with other vehicles. But last weekend's protests in Los Angeles threw a spotlight on one of the thorny problems that remain for autonomous vehicles: what to do about arson, vandalism or other physical attacks.

Five vehicles owned by Google's self-driving spinoff Waymo were set ablaze last Sunday during protests against the Trump administration's immigration policies. Images and video of the flaming cars quickly went viral, illustrating for a global audience how vulnerable robotaxis can be in volatile situations. For all their advanced technology, including expensive cameras and sensors, the cars seemed to be defenseless.

Waymo says the five cars were in downtown Los Angeles to serve passengers when they were attacked. There were no drivers to plead for mercy, and with crowds surrounding the vehicles, there was no escape path that didn't include threatening pedestrians — something Waymo vehicles are programmed not to do.

'They're very much sitting ducks,' said Jeff Fong, who has worked at tech companies including Lyft and Postmates and now writes a newsletter about cities and technology.

And it wasn't the first time Waymo was a victim of arson. Last year, a Waymo in San Francisco's Chinatown was set on fire during Lunar New Year celebrations. Police later charged a juvenile with starting the blaze, saying they had thrown a lit firework into the vehicle. Waymos have been vandalized in other ways, too, including having their tires slashed, their windshields smashed, their doors torn off and their exteriors defaced with spray paint. Local prosecutors have charged individuals in at least some cases.

Part of what makes robotaxis vulnerable is their caution. While it's impossible to know if a human driver behind the steering wheel could have deterred or escaped vandalism in any specific case involving a robotaxi, driverless vehicles are generally designed to stay put if there's any risk that they'd hit a person while moving.

'There's been so much effort into making sure they can't hurt human beings,' Fong said. 'That's the problem Waymo has been solving for, rightfully so, but when you have the problem where a human wants to do harm, these cars have no countermeasures.'

Autonomous technology companies, including Waymo, appear to be largely at a loss for ideas on how to deter vandalism over the long term. Their cameras may be a partial deterrent — Waymo says each of its cars has 29 cameras — and the company has cooperated with police to help find vandalism suspects after the fact. But Waymo's collection of street data through its cameras and sensors is also one of the sources of anger against the company and other startups like it. Some Uber and Lyft drivers have said that vandalism incidents bolster the importance of human drivers as a deterrent.

A spokesperson for Waymo said that in response to the protests in Los Angeles and elsewhere, it was temporarily adjusting its service area. Waymo declined to make anyone available for an interview about the problems of arson and vandalism and how the company plans to deal with such incidents in the long term.

The vandalism problem is mostly limited for now to Waymo, which is the biggest self-driving car company. It has about 1,500 vehicles operating in four regions, with additional cities scheduled to come online this year.
But the market is set to become more competitive soon, with Tesla saying it plans to launch a robotaxi service this month in Austin, Texas, and Amazon-backed Zoox planning a service in Las Vegas and San Francisco. Representatives for Tesla and Zoox did not respond to requests for comment about how they plan to avoid incidents like last Sunday's attack on Waymo vehicles.

The problem has been gnawing at robotaxi fans on message boards on Reddit. In one thread in January, users tossed around ideas like having dedicated security on motorcycles nearby or equipping Waymo vehicles with pepper spray.

Adam Millard-Ball, director of the UCLA Institute of Transportation Studies, said that robotaxis are a symbolic target for some street demonstrators. 'They're attacked not because they're autonomous cars but because they're a symbol of inequality in cities and a symbol of the power of large technology companies,' he said. He noted that electric scooters are also sometimes targets. He also said it's hard to imagine what the companies, police or city officials could do to eliminate the threat entirely. 'I don't think any country in the world has eliminated vandalism in public spaces,' he said.

The incidents are problematic for Waymo on multiple levels: not only the cost of repairing or replacing the vehicles, but also the reputational risk when images and videos spread widely online. There's also the possible danger to passengers. And although no passengers were harmed in the arson incidents, some passengers have been delayed or reported feeling threatened when the cars they were riding in were vandalized from the outside. Last year, a San Francisco woman posted a video online after she said two men targeted her while riding in a Waymo.

Then there's the lost business from what Waymo calls 'temporary service adjustments.' In Los Angeles and San Francisco this past week, Waymo stopped serving certain areas that are part of its normal service area. People using the company's app in recent days were greeted with a message, alluding to the street protests: 'Pickup times and routing may be affected by local events. Thank you for your patience.'

In San Francisco, that meant Waymo refused to take customers through or to several neighborhoods, including parts of the Financial District, the Civic Center area near City Hall and the sprawling South of Market neighborhood. Waymo also limited service to the Mission District, a historically working-class and Latino neighborhood that's also home now to many tech workers and a vibrant nightlife scene. Thousands of people attended an anti-President Trump protest in the Mission on Monday night, and the effects on Waymo reverberated for days: A post on X with an example of rerouting around the Mission went viral Wednesday, getting 1.2 million views.

Searches of the Waymo app by NBC News showed the service continuing to refuse service to parts of the Mission throughout the week, including during relatively quiet morning hours and on Friday. The app labeled certain destinations as 'unreachable.' A Waymo spokesperson said: 'We're taking these heightened measures now out of an abundance of caution.' They said the situation was temporary and subject to change quickly in response to conditions on the ground. Waymo hasn't published a map of which areas are restricted.

Mass anti-Trump protests advocating for 'No Kings' are scheduled for Saturday nationwide, providing another potential disruption for robotaxis.
Though the service restrictions may be temporary, they struck some people as discriminatory against poorer neighborhoods, with some social media users on X calling the practice 'redlining' on the part of Waymo. The term refers to the decadeslong practice of refusing home loans to predominantly Black neighborhoods.

In contrast, ride-hailing services Uber and Lyft, which use human drivers, still offered rides to the Mission in recent days, according to NBC News searches of their apps. San Francisco's Municipal Transportation Agency rerouted some buses during the height of anti-Trump administration protests, but then resumed regular service.

There have been no arrests for the attacks on Waymo vehicles in Los Angeles last Sunday. On Friday, the U.S. Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) said it was taking the lead in investigating the Waymo attacks, making them the subject of a federal investigation.

'The cause of these fires is quite obvious,' ATF special agent in charge Kenneth Cooper of the Los Angeles Field Division said in a statement. 'The task at hand now is to determine who is responsible. ATF's National Response Team is going to be a tremendous asset, and we look forward to the results of their efforts to hold the responsible parties accountable,' he said.
