
Meta AI's experimental smart glasses: Advanced sensors let them see, hear, and even sense how you feel
What if your next smart glasses could track your movement, gaze, and even health variables like your heart rate to assess what's happening in your surroundings and how you're reacting to it? Yeah, that sounds like a stretch right now, but if Meta's ambitions come to fruition, its Project Aria Gen 2 devices could take eyewear computing to a truly new dimension.
Note that the Aria Gen 2 glasses are currently reserved for select researchers; they're not a consumer product like the recently launched, less advanced Ray-Ban Meta smart glasses. Unlike that consumer line, the Aria Gen 2 is loaded with an advanced suite of sensors and cameras for collecting real-world data.
At the heart of the Aria Gen 2 are multiple computer vision cameras capable of capturing an 80-degree view, measuring depth and distance with remarkable accuracy. They also feature sophisticated eye-tracking, hand movement detection, and even a pulse sensor in the nose pad, all working together to interpret users' actions, focus, and emotional responses. This sensory fusion allows researchers to teach machines to observe, understand, and interact with the world in ways that mimic human perception.
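Meta hasn't published how the Aria Gen 2 fuses these streams, but the core idea, aligning readings from independently sampled sensors onto a common timeline, can be sketched in a few lines. The reading types and field names below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical reading types; Aria Gen 2's real data formats are not public.
@dataclass
class GazeSample:
    t: float      # timestamp, seconds
    yaw: float    # horizontal gaze angle, degrees
    pitch: float  # vertical gaze angle, degrees

@dataclass
class PulseSample:
    t: float
    bpm: float    # heart rate estimate from the nose-pad sensor

def nearest(samples, t):
    """Pick the sample closest in time to t (naive nearest-neighbour alignment)."""
    return min(samples, key=lambda s: abs(s.t - t)) if samples else None

def fuse(gaze_stream, pulse_stream, frame_times):
    """Align gaze and pulse readings to each camera frame's timestamp."""
    return [
        {"frame_t": t,
         "gaze": nearest(gaze_stream, t),
         "pulse": nearest(pulse_stream, t)}
        for t in frame_times
    ]

# Toy usage: two gaze samples, one pulse sample, one camera frame.
gaze = [GazeSample(0.00, 1.0, -2.0), GazeSample(0.04, 1.2, -2.1)]
pulse = [PulseSample(0.00, 72.0)]
print(fuse(gaze, pulse, frame_times=[0.033]))
```

Real pipelines would interpolate and calibrate rather than snap to the nearest sample, but the principle is the same: every camera frame gets paired with what the eyes and the body were doing at that instant.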
But that's not all. Meta's vision extends far beyond today's wearable gadgets. The company sees these research tools as a foundation for future devices that could blur the line between human and machine interaction. While the Aria Gen 2 isn't available for purchase (well, because it's still not a fully finished product) and access is limited to approved applicants, the insights gained could soon shape the next generation of smart glasses for everyone.
What's Meta's goal with Project Aria Gen 2? A future where AI and augmented reality work seamlessly together - not just to deliver notifications, but to help machines learn from our real-world experiences. Imagine your glasses recognising when you're searching for a misplaced wallet and alerting you - or even guiding a smart assistant to help you find it. Although we're not there yet, Meta's research suggests it's only a matter of time before these innovations reach the mainstream.
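That "where's my wallet?" scenario is, at its simplest, a memory of where each recognised object was last seen. Here's a toy sketch of that idea; all names are hypothetical and nothing here reflects Meta's actual software:

```python
import time

class LastSeenIndex:
    """Toy 'where did I leave it?' memory: maps object labels to the
    place and time they were last detected by the glasses' cameras."""

    def __init__(self):
        self._index = {}

    def observe(self, label: str, location: str) -> None:
        # Called whenever an (assumed) on-device detector recognises an object.
        self._index[label] = (location, time.time())

    def locate(self, label: str):
        # Returns (location, timestamp) or None if never seen.
        return self._index.get(label)

# Usage: the detector reports sightings; the assistant answers queries.
memory = LastSeenIndex()
memory.observe("wallet", "kitchen counter")
print(memory.locate("wallet"))  # ('kitchen counter', <timestamp>)
```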
It's unclear if the Aria Gen 2 will ever be available to the public, but it's possible that some iteration of it will see the light of day in the near future. Such products essentially lay the groundwork for innovations that could change how we live, work, and connect. Swapping out your smartphone for smart glasses is not a far-fetched idea, but for now, we've got to keep carrying these bricks around.

Related Articles


Time of India, 8 hours ago
Microsoft CEO Satya Nadella to Computer Science students: All of us are going to be more software architects
Microsoft CEO Satya Nadella has shared advice for aspiring tech professionals. The CEO recently appeared in a conversation with tech YouTuber Sajjaad Khade, where he urged students to focus on building strong fundamentals in computational thinking, even as AI reshapes the software industry. He stressed that despite AI's growing role in coding, the ability to break down problems logically and design systematic solutions remains essential. He also warned that even with AI, success still depends on giving clear, structured instructions, a skill that blends technical knowledge with systems thinking. 'The path to being that software architect gets speeded up,' Nadella said, adding that soon, 'All of us are going to be more software architects.'

Getting fundamentals of software is important: Satya Nadella

During the conversation, Khade asked Nadella: 'In this world of AI, if I'm a beginner, just want to break into tech, what's your number one piece of advice?' Nadella replied: 'Just getting real fundamentals of software (if you're a software engineer), I think matters a lot.' He added: 'To me, having the ability to think computationally (is important).' Nadella illustrated the point with his own example, saying he was able to fix a bug just by assigning it to the Copilot coding agent: 'Except I was thinking about it, it was a pretty cool issue, right? The issue was I did a filter, which was basically a percentile... Creating a feature. But then I said, "Oh man, this is, like, you know, I could, you know, recount what is a SQL, right?"'

When Satya Nadella revealed that up to 30% of Microsoft's code is now written by AI

During a conversation with Meta CEO Mark Zuckerberg earlier this year, Nadella revealed that AI now writes up to 30% of Microsoft's code base. 'I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,' Nadella said at the time.
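The transcript is garbled, but a "filter, which was basically a percentile" plausibly means keeping only the values at or above a chosen percentile, something SQL expresses with window functions like PERCENTILE_CONT. A hedged reading of that idea, sketched in Python rather than the SQL Nadella alludes to:

```python
# Illustrative guess at a "percentile filter": keep values at or above
# the p-th percentile of a dataset. Data and threshold are made up.

def percentile(values, p):
    """Nearest-rank percentile of a non-empty list, 0 <= p <= 100."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def percentile_filter(values, p):
    """Return the values at or above the p-th percentile."""
    cutoff = percentile(values, p)
    return [v for v in values if v >= cutoff]

latencies = [12, 7, 31, 45, 9, 28, 50, 16]
print(percentile_filter(latencies, 75))  # [31, 45, 50]
```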


Time of India, a day ago
Hey Siri, Am I Okay?: AI tools are being trained to detect suicidal signals
Suicide risk identification on social networks: Prompts fed to AI assistants are not confined to everyday tasks, such as asking Alexa to play the family's favourite song, asking Siri on a random Tuesday to set a reminder, or asking Google Assistant to find a song by humming it. But what if a user, in an especially low moment, were to ask, 'Am I okay?', or type other prompts that hint at an intent to self-harm?

Suicide attempts remain alarmingly prevalent, requiring more effective strategies to identify and support individuals at high risk. Current methods of suicide risk assessment largely rely on direct questioning, which can be limited by subjectivity and inconsistent interpretation. Simply put, their accuracy and predictive value remain limited: regardless of the large variety of scales available for assessing risk, predictability has not improved over the past 50 years. Artificial intelligence and machine learning offer new ways to improve risk detection, but their accuracy depends heavily on access to large datasets that can help identify patient profiles and key risk factors. As outlined in a clinical review, AI tools can help identify patterns in the data, generate risk algorithms, and determine the effect of risk and protective factors on suicide. AI reassures healthcare professionals with improved accuracy, especially when combined with their own skills and expertise, even though diagnostic accuracy can never reach 100%. According to Burke et al., machine learning studies in suicide research have three main goals: improving the accuracy of risk prediction, identifying important predictors and the interactions between them, and modelling subgroups of patients. At an individual level, AI could allow for better identification of people in crisis and appropriate intervention, while at a population level, algorithms could find groups at risk and the individuals within those groups most at risk of suicide attempts.

Social media platforms are both part of the problem and part of the solution. While often criticized for contributing to the mental health crisis, these platforms also provide a rich source of real-time data, enabling AI to identify individuals showing signs of suicidal intent. This is achieved by analyzing users' posts, comments, and behavioral patterns, allowing AI tools to detect linguistic cues, such as expressions of hopelessness or other emotional signals that may indicate psychological distress. For instance, Meta employs AI algorithms to scan user content and identify signs of distress, allowing the company to reach out and offer support or even connect users with crisis helplines. Studies such as those by the Black Dog Institute also demonstrate how AI's natural language processing can flag at-risk individuals earlier than traditional methods, enabling timely intervention. There are also companies such as Samurai Labs and Sentinet that have developed AI-driven systems to monitor social media content and flag posts suggesting suicidal ideation. For example, Samurai Labs' 'One Life' project scans online conversations to detect signs of high suicide risk; upon detecting these indicators, the platform leads the user to support resources or emergency assistance.
In the same manner, Sentinet's algorithms analyze thousands of posts daily, triggering alerts when users express emotional distress and allowing for timely intervention. While AI isn't a replacement for human empathy or professional mental health care, it offers a promising advance in suicide prevention. By identifying warning signs faster and more precisely than human assessment alone, and by enabling early interventions, AI tools can serve as valuable allies in the fight against suicide.
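The systems described above are proprietary, but the simplest form of linguistic-cue detection is a weighted lexicon scan over post text. The cue list, weights, and threshold below are invented for illustration; production systems use far more sophisticated, context-aware NLP models, not keyword lists:

```python
# Toy lexicon-based cue detection. Cues, weights, and the threshold are
# purely illustrative, not drawn from any real screening instrument.

DISTRESS_CUES = {
    "hopeless": 2.0,
    "can't go on": 3.0,
    "no way out": 3.0,
    "burden to everyone": 2.5,
    "goodbye": 1.0,
}

def distress_score(post: str) -> float:
    """Sum the weights of every cue phrase found in the post."""
    text = post.lower()
    return sum(w for cue, w in DISTRESS_CUES.items() if cue in text)

def flag_for_review(post: str, threshold: float = 3.0) -> bool:
    """Route high-scoring posts to a human reviewer, never to automated action."""
    return distress_score(post) >= threshold

print(flag_for_review("feeling hopeless, like there's no way out"))  # True
```

Even in this toy form, the design choice matters: the flag routes a post to a human, echoing the article's point that these tools assist rather than replace professional judgment.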


India Today, a day ago
Anthropic building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies. Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments.

'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments and the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov runs within Microsoft's Azure cloud, giving agencies full control over how it's deployed and managed. The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces. 'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.