
Meta's smart glasses can now describe what you're seeing in more detail
Meta announced two new features designed to assist blind or low vision users by leveraging the Ray-Ban Meta smart glasses' camera and their access to Meta AI. The announcement coincided with Global Accessibility Awareness Day.
Rolling out to all users in the US and Canada in the coming weeks, the update lets users customize Meta AI to provide more detailed descriptions of what's in front of them when they ask the smart assistant about their environment. In a short video shared alongside the announcement, Meta AI goes into detail about the features of a waterside park, describing its grassy areas as 'well manicured.'
The feature can be activated by turning on 'detailed responses' in the Accessibility section of the Device settings in the Meta AI app. Although it's currently limited to users in the US and Canada, Meta says detailed responses will 'expand to additional markets in the future,' but has provided no details about when or which countries would get it next.
Meta also confirmed today that its Call a Volunteer feature, first announced last September as part of a partnership with the Be My Eyes organization and released last November in a limited rollout covering the US, Canada, UK, Ireland, and Australia, will 'launch in all 18 countries where Meta AI is supported later this month.'
Blind and low vision users of the Ray-Ban Meta smart glasses can use the feature to connect with a network of over 8 million sighted volunteers and get assistance with everyday tasks such as following a recipe or locating an item on a shelf. When a user says, 'Hey Meta, Be My Eyes,' a volunteer can see the user's surroundings through a live feed from the glasses' camera and provide descriptions or other assistance through the glasses' open-ear speakers.

Related Articles
Yahoo · an hour ago
Tech giants' indirect emissions rose 150% in three years as AI expands, UN agency says
By Olivia Le Poidevin

GENEVA (Reuters) - Indirect carbon emissions from the operations of four of the leading AI-focused tech companies, Amazon, Microsoft, Alphabet and Meta, rose on average by 150% between 2020 and 2023 as they used more power for energy-demanding data centres, a United Nations report said on Thursday.

The use of artificial intelligence is driving up global indirect emissions because of the vast amounts of energy required to power data centres, said the report by the International Telecommunication Union (ITU), the U.N. agency for digital technologies. Indirect emissions include those generated by purchased electricity, steam, heating and cooling consumed by a company.

Amazon's operational carbon emissions grew the most, at 182% in 2023 compared with three years earlier, followed by Microsoft at 155%, Meta at 145% and Alphabet at 138%, according to the report. The ITU tracked the greenhouse gas emissions of 200 leading digital companies between 2020 and 2023.

Meta, which owns Facebook and WhatsApp, pointed Reuters to its sustainability report, which says it is working to reduce the emissions, energy and water used to power its data centres. The other companies did not immediately respond to requests for comment.

As investment in AI increases, carbon emissions from the top-emitting AI systems are predicted to reach up to 102.6 million tons of carbon dioxide equivalent (tCO2) per year, the report stated. The data centres needed for AI development could also put pressure on existing energy infrastructure. "The rapid growth of artificial intelligence is driving a sharp rise in global electricity demand, with electricity use by data centres increasing four times faster than the overall rise in electricity consumption," the report found.

It also highlighted that although a growing number of digital companies had set emissions targets, those ambitions had not yet fully translated into actual reductions of emissions.


CNBC · an hour ago
Anduril raises funding at $30.5 billion valuation in round led by Founders Fund, chairman says
Defense tech startup Anduril Industries has raised $2.5 billion at a $30.5 billion valuation, including the new capital, Chairman Trae Stephens said on Thursday.

"As we continue working on building a company that has the capacity to scale into the largest problems for the national security community, we thought it was really important to shore up the balance sheet and make sure we have the ability to deploy capital into these manufacturing and production problem sets that we're working on," Stephens told Bloomberg TV at the publication's tech summit in San Francisco.

Reports of the latest financing surfaced in February, around the same time the company took over Microsoft's multibillion-dollar augmented reality headset program with the U.S. Army. Last week, Anduril announced a deal with Meta to create virtual and augmented reality devices intended for use by the Army.

The latest funding round, which doubles Anduril's valuation from August, was led by Peter Thiel's Founders Fund. The venture firm contributed $1 billion, said Stephens, who's also a partner at the firm, adding that it's the largest check Founders Fund has ever written.

Since its founding in 2017 by Oculus creator Palmer Luckey, Anduril has been working to shake up a defense contractor space currently dominated by Lockheed Martin and Northrop Grumman. Anduril has been a member of the CNBC Disruptor 50 list three times and ranked No. 2 last year.

Luckey founded Anduril after his ouster from Facebook, which acquired Oculus in 2014 and later made its virtual reality headsets the centerpiece of its metaverse efforts. Stephens emphasized the importance of the recent partnership between the two sides, and "Palmer being able to go back to his roots and reach a point of forgiveness with the Meta team."

In April, Founders Fund closed a $4.6 billion late-stage venture fund, according to a filing with the SEC. A substantial amount of the capital was provided by the firm's general partners, including Stephens, a person familiar with the matter told CNBC at the time.

Anduril is one of the most highly valued private tech companies in the U.S. and has been able to reel in large sums of venture money during a period of few big exits and IPOs. While the IPO market is showing signs of life after a three-plus-year drought, Anduril isn't planning to head in that direction just yet, Stephens said.

"Long term we continue to believe that Anduril is the shape of a publicly traded company," Stephens said. "We're not in any rapid path to doing that. We're certainly going through the processes required to prepare for doing something like that in the medium term. Right now we're just focused on the mission at hand, going at this as hard as we can."


Fast Company · 2 hours ago
Enjoy 'AI slop' summer. What's coming next is worse
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

'AI Slop' summer is here

AI image and video generation tools have gone mainstream, with millions of people using them to create content for platforms like TikTok and YouTube. Social networks such as Facebook and Pinterest are also seeing a surge in AI-generated posts. Meta is actively promoting this trend, as AI content is easy to produce and often drives higher engagement, creating more opportunities to sell ads.

Much of the AI-generated content is what critics call 'AI slop': low-quality material often produced by low-wage workers in developing countries aiming to harvest clicks on platforms like YouTube, Facebook, and TikTok. This content frequently spreads further via messaging apps like WhatsApp and is often political in nature. One growing genre features right-wing fantasy videos portraying acts of revenge or defiance by MAGA figures such as Donald Trump or Pam Bondi. These are typically just still images with overlaid text, and clearly fictional. (Left-leaning versions exist too, though they more often rely on real footage, such as Jamie Raskin or Jasmine Crockett dismantling Republican talking points in Congress.)

AI-generated content is also increasingly surfacing in search results, often pushing aside higher-quality human-created material. E-commerce platforms like Amazon are flooded with AI-generated product descriptions, user reviews, and even entire books. Some news organizations have started publishing AI-written articles, especially in sports and news roundups, many riddled with inaccuracies. Recently, the Chicago Sun-Times and The Philadelphia Inquirer unintentionally ran book list inserts featuring AI-generated descriptions of books that don't actually exist.

Right now, much of the AI-generated content online can still be distinguished from genuinely human-made material. Take, for example, a viral AI video from April that depicted overweight U.S. factory workers (a satire of Trump's tariff policies). It looked fairly realistic but still gave off that unmistakable 'generated' vibe. Still, the line is blurring fast. Consider the recent viral clip of an Australian woman trying to pass airport security with her 'service kangaroo.' It racked up over a million likes before it was revealed to be AI-generated. Some viewers saw through it; many did not. The video proved that with a semi-plausible premise and decent AI tools, the boundary between real and fake can dissolve entirely.

It's not hard to see where this is going. Google's new Veo 3 video generation tool is a case in point: the sample videos are alarmingly realistic. Time recently showed how these tools can create convincing deepfakes of political riots and election fraud. AI-generated content has been advancing for years, but we may have arrived at a moment where even video, once the hardest medium to fake, can no longer be trusted. With more powerful tools and social platforms eager to boost engagement, we're likely heading toward a web saturated with AI-generated content. And when anything can be fake, everything becomes suspect. Are we ready for the 'zero-trust' internet?

Reddit sues Anthropic over AI training data

The social platform Reddit says the AI company Anthropic has used content created by Reddit users to train AI models in ways that violate its policies. In a lawsuit filed Wednesday in a San Francisco court, Reddit accused Anthropic of using users' posts without permission, causing harm to the platform.

AI companies rely heavily on information stores like Reddit to train the large language models that power popular chatbots such as ChatGPT and Anthropic's Claude. Reddit is seen as a particularly valuable resource because it holds millions of human-to-human conversations across thousands of topics, spanning the past two decades. The conversations are valuable not only for their content but for how authentically they reflect the way people write and speak. No wonder Reddit cofounder and CEO Steve Huffman calls it 'the most human place on the internet.'

Content licensing for AI training is a big and growing business for the platform. Reddit's shares on the New York Stock Exchange finished the day up more than 7% after news of the lawsuit broke Wednesday. The company has already formed content licensing agreements with Google and OpenAI (Sam Altman is a major shareholder in Reddit). It's possible that the lawsuit was filed after Reddit and Anthropic failed to come to terms on a content licensing agreement.

Reddit certainly isn't the first content company to sue a well-funded AI lab for alleged misuse of data. OpenAI, Perplexity, Google, and others have all been the target of legal actions related to training data. Many of these cases center on the question of whether data that's publicly available on the internet falls under the 'fair use' doctrine of the Copyright Act, rendering it fair game for AI training.

Trump's foreign student ban: a master class in the art of the self-own

Secretary of State Marco Rubio said last week that the U.S. will begin revoking visas for visiting Chinese students, including those in 'critical fields,' and will tighten visa requirements for future applicants. The Trump administration repeatedly claims it wants America to win the global AI race while being openly hostile to the very brains that could help the U.S. achieve that goal. Research from the National Foundation for American Policy shows that two-thirds (66%) of U.S.-based AI startups have immigrant cofounders, and 55% of billion-dollar startups were founded or cofounded by immigrants.

Meanwhile, other countries are rolling out the red carpet. The Hong Kong University of Science and Technology offered guaranteed admission to any Harvard international student. Germany and Ireland are courting current and prospective Harvard students. China, too.

As AI reshapes talent needs, foreign students will be needed to fill demand. Because AI coding assistants are significantly increasing the productivity of individual engineers, big tech companies are investing less in entry-level programmers (and more in GPUs and data centers). Microsoft CEO Satya Nadella says 20% to 30% of Microsoft code is now AI-generated, and he expects that rate to grow to 95% by 2030. Tech companies will likely need people with PhDs or other graduate-level degrees to fill more specialized roles, such as those responsible for training and steering AI models. And that talent pool isn't big enough. International graduate students with advanced technical skills are more valuable than ever. The administration is signaling a retreat from the global competition for AI talent.