Meta Aria Gen 2 glasses for research hint at what's next for its wearables

Meta has unveiled its second-generation prototype smart glasses for research, named 'Aria Gen 2.' The company said that the glasses are designed as a research platform for augmented reality (AR), artificial intelligence (AI), and robotics. Meta has clarified that Aria Gen 2 is not intended for consumer release, but rather as a tool for researchers and developers. However, the new prototype offers a glimpse into the direction Meta might take with future consumer-facing smart glasses.
Currently, Meta offers its Ray-Ban Meta smart glasses to consumers, featuring built-in cameras, microphones, and integration with the Meta AI assistant.
Meta Aria Gen 2 glasses: Details
Meta describes Aria Gen 2 as a wearable device that 'combines the latest advancements in computer vision, machine learning, and sensor technology.' The company said that its lightweight and compact design makes it suitable for researchers working across various environments to collect data or prototype new experiences.
One of the key upgrades in Aria Gen 2 is its eye-tracking system, which can track gaze per eye, the vergence point, blinks, pupil centre and diameter, corneal centre, and more. It also includes a hand-tracking system that captures hand motion in 3D space, producing articulated joint poses. Meta says this data can be used for tasks such as training robotic hands.
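To make the kind of data described above more concrete, here is a minimal illustrative sketch of how per-eye gaze and articulated hand-pose samples might be represented. The field names are hypothetical and do not reflect Meta's actual Project Aria data formats.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Illustrative only: these records are hypothetical, not Meta's actual schema.

@dataclass
class EyeGazeSample:
    timestamp_ns: int                            # capture time on the device
    left_gaze_dir: Tuple[float, float, float]    # unit direction vector, left eye
    right_gaze_dir: Tuple[float, float, float]   # unit direction vector, right eye
    vergence_point: Tuple[float, float, float]   # 3D point where the gaze rays converge
    pupil_diameter_mm: Tuple[float, float]       # (left, right) pupil diameters
    is_blinking: bool                            # blink detected in this sample

@dataclass
class HandPoseSample:
    timestamp_ns: int
    # 3D positions of articulated hand joints (wrist, knuckles, fingertips, ...);
    # None if that hand is not visible in the frame.
    left_joints: Optional[List[Tuple[float, float, float]]]
    right_joints: Optional[List[Tuple[float, float, float]]]
```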
Another highlight is the system's ability to track movement in six degrees of freedom (6DOF) using Visual Inertial Odometry (VIO). This enables accurate spatial awareness and mapping of the surrounding environment.
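A 6DOF pose combines 3D position with 3D orientation. The sketch below shows, under stated assumptions and with made-up values, how such a pose might be represented and used to map a point from the device frame into the world frame; it is not Meta's API, just a generic illustration of the idea behind VIO-style tracking.

```python
import numpy as np

def quat_to_rot(qw: float, qx: float, qy: float, qz: float) -> np.ndarray:
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),     2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw),     1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw),     2 * (qy * qz + qx * qw),     1 - 2 * (qx * qx + qy * qy)],
    ])

# A 6DOF pose: three translational plus three rotational degrees of freedom.
position = np.array([0.5, 1.2, 0.0])        # metres in the world frame (made-up values)
rotation = quat_to_rot(1.0, 0.0, 0.0, 0.0)  # identity orientation for illustration

# Map a point observed in the device frame into the world frame.
point_in_device = np.array([0.1, 0.0, 0.3])
point_in_world = rotation @ point_in_device + position
print(point_in_world)
```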
Additionally, Aria Gen 2 includes a PPG heart rate sensor and a contact microphone, both embedded in the nosepad. According to Meta, this placement improves their functionality—for instance, the contact microphone enhances voice reception in noisy environments.
Key hardware features of Aria Gen 2 include:
12MP RGB camera
4x CV (Computer Vision) cameras
Eye tracking cameras
7x spatial microphones
Contact microphone
Stereo speakers
USB-C port
Privacy switch
Ambient light sensor
PPG heart rate sensor
Barometer
Accelerometer and gyroscope
Meta Aria Gen 2 glasses: Availability
Meta will begin accepting applications to work with Aria Gen 2 later this year. Researchers can join the Aria Gen 2 interest list through Meta's website. Meanwhile, applications for the Aria Research Kit using Gen 1 glasses remain open on a rolling basis.
The company will also showcase the Aria Gen 2 glasses at the upcoming Computer Vision and Pattern Recognition Conference (CVPR) 2025, through a series of interactive demos.
Meta smart glasses: What's next
Aria Gen 2 is focused on AI and robotics research, which sets it apart from Meta's previous prototype augmented reality glasses, 'Orion,' which were geared toward holographic projection and immersive AR experiences. The next generation of Meta smart glasses may borrow elements from both Aria and Orion.
In addition, Meta is reportedly collaborating with Oakley, a brand under EssilorLuxottica (also the parent of Ray-Ban), to develop smart glasses tailored for athletes. The company continues to offer Ray-Ban Meta smart glasses as its primary consumer product in the wearables category.
Meta's head start in the smart glasses race is now facing new competition. At Google I/O last month, Google previewed its Android XR-powered smart glasses, showcasing features like messaging, navigation, and real-time translation using built-in lens displays. Apple is also believed to be working on its own pair of smart glasses to compete with Meta, with a launch expected by the end of next year.