
Latest news with #AndroidXR

Samsung tipped to launch smart AI glasses with camera, mic, and speakers but no screen

India Today

2 days ago

  • India Today


Samsung is rumoured to be preparing yet another foray into wearable tech, with a fresh pair of smart glasses tipped to arrive next year. But don't mistake them for Samsung's upcoming Android XR headset or Google's own smart eyewear. According to Korean outlet SEDaily, this mysterious new model will be a completely separate venture, leaning more towards Meta's Ray-Ban Stories than a full-blown augmented reality device.

The rumours and leaks suggest these glasses will come equipped with a speaker, microphone and camera, but notably, no display. That means they're less about overlaying virtual information in your field of vision and more about discreetly capturing moments, taking calls and perhaps chatting with an onboard AI assistant. If this sounds familiar, it's because these specs were first mentioned last year, when reports surfaced about Samsung producing a Meta-like model. Back then, a late-2025 release was predicted, a date that has clearly slipped. Still, the idea hasn't been shelved, and sources say the project is once again gathering momentum.

Meanwhile, Samsung and Google have been jointly developing a completely different pair of smart glasses, ones that do include a display. These Android XR-powered specs would allow wearers to see virtual elements overlaid onto the real world—think smartwatch meets spectacles—making them closer to a mixed-reality device than a wearable camera. That project sits alongside 'Project Moohan,' Samsung's Android XR headset, which promises an enclosed, VR-style experience. It's expected to rival Apple's Vision Pro, with Android XR at the helm and Google's Gemini AI playing a central role.

The latest report claims Samsung's Meta-style smart glasses could see the light of day towards the end of 2026, well after the likely debut of its XR headset. But given the number of delays plaguing the sector, any dates should be taken with a healthy pinch of salt.

How exactly these new glasses will operate remains a mystery. However, it wouldn't be far-fetched to imagine them running a stripped-back version of Android XR, offering integration with Gemini and possibly a Live View mode via the camera. Without a screen, their feature set could resemble that of Google's Pixel Buds with Gemini, providing AI-driven assistance and audio feedback rather than immersive visuals.

With Samsung, Google, Apple, and even Xiaomi vying for a slice of the smart-glasses pie, the market is starting to heat up. Samsung's two-pronged approach, one model aimed at content capture and audio interaction, the other at immersive XR experiences, could position the company uniquely, catering to both casual users and tech enthusiasts hungry for high-end hardware.

Beyond the vague release window, details on pricing or technical specifications are scarce. Still, expect more concrete information to emerge early next year, especially if Samsung intends to tease the device ahead of a formal unveiling. Until then, fans will be left wondering whether Samsung's latest eyewear will be a stylish everyday gadget, a hands-free AI companion, or simply another ambitious experiment destined to join the company's growing list of tech curiosities.

Samsung tipped to be making its own Meta Ray-Ban-style smart glasses — here's when they launch

Tom's Guide

3 days ago

  • Business
  • Tom's Guide


Samsung is reportedly developing a pair of Meta Ray-Ban-esque smart glasses that it hopes to reveal before the end of 2026. These would be separate from the full AR glasses, dubbed Project Haean, in development in partnership with Google.

According to the Korean publication Seoul Economic Daily (via Jukanlosreve), the display-free Samsung glasses will feature speakers, a microphone and a camera, similar to the Meta Ray-Ban glasses. SE Daily claims that Samsung is developing its smart glasses as a way to take a 'leading position' in the burgeoning market. An unnamed industry insider quoted in the machine-translated article stated that Samsung 'must also develop its own brand' separate from Google.

It's not clear from the report whether Samsung will develop an in-house version of Android XR that connects to the glasses, much as it layers its One UI operating system on top of Android, or whether the AI features will live in a separate app connected to your Galaxy phone or smartwatch.

With the rise of AI, Samsung apparently believes that smart glasses will supplant smartphones as the 'next generation of devices.' The simple goal is to create an affordable pair of smart glasses with AI-powered features that don't require a large mixed-reality headset. No pricing was shared, but Meta sells its Ray-Ban glasses for between $249 and $299, depending on the frames, so that is a baseline for display-free smart glasses.

Samsung's Project Moohan XR headset is slated to arrive later this year, alongside a set of Android-powered smart glasses around the same time. Samsung is reportedly still finalizing the features and specifications of those glasses, but they are supposed to run the same Android XR operating system as Moohan. These glasses would be more akin to the Xreal One Pro AR glasses. Samsung is supposedly going to emphasize comfort with Project Haean by offering options to adjust the glasses based on your face shape.

As of this writing, we don't know exactly when Project Moohan or Haean will be released beyond vague 'this year' claims. For now, we have to sit tight and wait to see what Samsung has in store for us this year and next.

Smart Glasses Revolution: Inside the biggest tech trend of the next 10 years

Tom's Guide

5 days ago

  • Business
  • Tom's Guide


Ever since I sprinted across Las Vegas in 2017 to pick up a pair of Snapchat Spectacles from a vending machine, smart glasses have changed drastically. From glorified camera glasses to a wearable external monitor, all the way to an AI-infused pair of specs, we've been through it all to make it to this very moment – and the moment we're in is an interesting one.

That's because we can already see everything we want our smart glasses to be, just in something significantly bigger: VR headsets. Currently, these are very different devices, running along two parallel trajectories of development. But after speaking to Snap, Qualcomm and more, it's clear that the race is on to find the middle ground between the two — to be first to a truly AI-infused augmented spatial future of wearables. With significant developments tackling the key challenges, this 10-year race could very well produce the device that kills the smartphone and becomes the next big thing. Every big company you know is in the running, with Meta's Project Orion prototype charging into the lead, Android XR and Snap's new consumer specs catching up, and even Apple 'hell-bent' on making its own glasses.

Let's take a look at where we are now, why smart glasses are indeed the next big thing, and what it will take to get there.

If you take a look at the best smart glasses you can buy right now, you've got two categories: AI and AR specs. AI glasses like the Ray-Ban Meta Smart Glasses bring the power of multi-modality to something that is super wearable. And you can see the real benefits they bring — from asking quick questions like your standard smart assistant to detailed prompts that draw on an understanding of the world around you. In fact, sales of Ray-Ban Meta glasses so far this year have more than tripled compared to the same time last year — more than 200% growth. That's according to EssilorLuxottica, which owns smart glasses brands like Ray-Ban and Oakley.

For me, they really come into their own when I'm travelling. Putting ingredients on a counter and asking for a recipe of what to cook is always a massive help; live translation is a huge move to bridge the gaps of understanding; and asking for more information on historic locations gives you new context like a tour guide.

Then you've got AR glasses — essentially a portable external monitor that has been shrunk down into a pair of glasses. With micro-OLED display tech projecting into prisms in front of the lenses, you can get a 100+ inch display wherever you go. That is huge for long-distance travel. Something like the Xreal One Pro specs really comes in clutch for reducing the neck strain of looking down at my laptop or Steam Deck. Those prisms don't make them great for walking around, but they are the best realization of a screen in your glasses right now. And the ever-increasing capability to simulate an ultra-wide display, or to use six-degrees-of-freedom tracking (known as 6DoF) to anchor something in place, is a signifier of far greater capabilities going forward. I mean, just take a look at the spatial widgets announced in visionOS 26 — with 6DoF, that is possible with glasses.
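To make that anchoring idea concrete, here is a minimal sketch in Python of what 6DoF data is and why it keeps a virtual object pinned in place. The class, the names, and the yaw-only math are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DoF:
    """Six degrees of freedom: three for position, three for orientation."""
    x: float       # metres, world frame
    y: float
    z: float
    yaw: float     # radians; pitch and roll are ignored in the math below
    pitch: float
    roll: float

def offset_in_head_frame(widget: Pose6DoF, head: Pose6DoF) -> tuple:
    """Re-express a world-anchored widget relative to the current head pose.
    Simplified to translation plus yaw; a real renderer uses full rotations."""
    dx, dy, dz = widget.x - head.x, widget.y - head.y, widget.z - head.z
    c, s = math.cos(-head.yaw), math.sin(-head.yaw)
    return (c * dx - s * dz, dy, s * dx + c * dz)

# The widget's world pose never changes; only the head pose updates each frame,
# so the rendered offset changes and the widget appears pinned in space.
head = Pose6DoF(0.0, 1.6, 0.0, 0.0, 0.0, 0.0)
widget = Pose6DoF(0.0, 1.5, -2.0, 0.0, 0.0, 0.0)  # 2 m in front of the user
print(offset_in_head_frame(widget, head))  # approx (0.0, -0.1, -2.0)
```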
It's clear that while Apple Vision Pro opened the door to spatial computing, a whole lot of software, from Cupertino's AR play to SnapOS and even Meta's OS in the Quest 3, is a preview of what you will get in glasses. Or if you wanted to go even more 'tin foil hat conspiracy' with me, I'd argue that the new Liquid Glass design motif of Apple's software is subtly training us to get used to smart glasses. That transparency does make things a little harder to read, but users will adapt — just in time for new specs.

But the end goal is far greater than that. The mission for the future is to bring both AR and AI together, as the possibilities are huge. Removing the smartphone from the equation to ensure someone is present in the moment is the pinnacle of the digital detox movement that is starting to happen — smart glasses that bring both AI and AR to the table are key to this.

'I am somewhat worried my kids think I look like this,' said Scott Myers, VP of hardware engineering at Snap Inc., holding up a smartphone to his face and talking about how phones have become distraction devices. 'Specs are the next generation of computing, and they're a powerful, wearable computer in a lightweight glasses form factor. And because they naturally integrate digital experiences with the physical world and enable me to look up at the world, I'll stop pulling out my phone so much, or maybe I don't need to take my tablet with me on trips anymore.'

Imagine that same recipe situation as above, but with an image-based guide supplementing it, too. Or that same moment of discovering historical monuments, but having map pins identify every single one to visit. While all these companies have their own ideas of what the dream smart glasses are, all are in agreement that there are fundamental challenges to be solved here.

Displays need to get better

Right now, you've got a pick of two ways to do this: a glass prism that an OLED picture is projected into (commonly called a 'bird bath,' and seen in the Viture Luma Pros), or a particular section of the glasses lens etched to refract light from the arm (named a 'waveguide'). Bird baths have the better, wider picture quality, but the glasses have to be slightly bigger to house them — looking like the spy glasses you get at the Scholastic book fair. Meanwhile, the waveguide is certainly a lot more subtle, but being the size of a miniature postage stamp on one lens does lead to the display being far smaller and worse in quality.

But companies like Lumus are quietly working on this in the background, and working with a lot of big names in the industry. The secret sauce is reflective waveguides. 'With the geometric waveguide lenses we're making, you can get a far wider field of view while not compromising on the picture quality or brightness needed to see it in daylight,' said David Goldman, VP of marketing and communications at Lumus. 'Not only that, but with the liquid crystal display potential, you can actually improve a person's vision too.'

The challenge is to get the best of both worlds here — ditching the bird baths to provide full clarity of the world around you like a regular pair of glasses, while still offering that same level of screen quality for both full immersion and augmenting your surroundings.

Break the reliance on other devices

This comes down to one thing: getting a chip powerful enough to fit entirely on the glasses without any need to connect to another device.
At the moment, we're either limited to AR glasses having a chip that tricks your laptop into thinking you have a 32:9 ultrawide monitor on your face (typing this on my ultrawide Xreal Ones right now on a plane), or a fast but limited chip that keeps latency tolerably low between making an AI request through your specs and the phone doing the heavy lifting (looking at you, Ray-Ban Metas).

Looking to the mid-term future, the answer seems to be a puck, like what you see in Meta's Project Orion – a dedicated device to fuel the experience. Other companies agree: you see this in Xreal's Project Aura, and Qualcomm believes the concept sits on a spectrum. 'Some operators would love a glass that is connected directly to 5G, and we will work on that. Others want sports glasses for going on a run, and others will just want a general assistant,' said Said Bakadir, VP of product management at Qualcomm. 'I think we're gonna see that spectrum evolving from something that is minimum possible in the glass to getting rid of other devices.'

However, if smart glasses are truly going to take off, there can't be any pucks or separate devices. We need it all to work entirely on the glasses for this to be the same truly disruptive iPhone-esque moment for consumer tech.

Developers, developers, developers!

Speaking of the iPhone, you may not know this given how much of a global icon it is now, but the real breakthrough for Apple's mini slab didn't arrive until the App Store did, one year later. Opening up a platform for developers to create their own experiences gives people an ever-growing list of reasons to buy your device, and AR glasses need that moment. So far, there hasn't really been a shared app marketplace for AR glasses like the App Store. But two things may flip this entirely on its head: Android XR bringing the Google Play Store to specs, and Snap's new consumer glasses channeling the work of devs who have created hundreds of thousands of Lenses over the past few years.

'We're really here to build this with the community because it's an entirely new paradigm,' said Snap's Myers. 'There's a lot of things that will take time for people to understand and figure out. It's not just going to be, "oh, here you go, developers — come build for this!" That's not going to work in my opinion. It's a community-led discussion and I couldn't be happier with that.'

The constant stream of new apps needs to become as synonymous with the smart glasses of the future as the App Store is with the iPhone.

All-day stamina guaranteed

Batteries are not ready for prime time in smart glasses — the longevity of lithium-ion cells is always heavily compromised by the limited capacity that comes with keeping the glasses from being too heavy on someone's face. The end result is being careful with the number of interactions you make with your Ray-Ban Meta shades at the moment. Fortunately, Meta is on the right track to improving this, with the Oakley Meta HSTN glasses effectively doubling the longevity. That being said, there's still a way to go.

What's the answer? Nobody is quite sure yet, but it seems to start with the direction smartphones are heading in: silicon-carbon. This next-generation battery tech packs more power into the same space, so it could be a starting point. The other thing the industry has learned, just as Meta did with the Ray-Bans, is that battery life is all about calculating and optimizing the software's usage of every microwatt.
"I worked on smartphones for a very long time, said Myers. 'While the battery capacity has grown pretty consistently, it's really the way people are using the software that has gotten much better. We see the same trajectory for Snap OS.' If Ray-Ban Meta smart glasses prove one thing, it's that when it comes to AI devices, glasses are the best realization of that vision — better than Rabbit R1, better than the Humane AI Pin. But even more than that, we've seen multi-modal AI unlock some truly useful features in a pair of smart glasses. Because at the end of the day, you want your glasses to do more than tell you you're looking at a tree. 'AI will be the core intelligence layer. It will understand context, proactively assist, personalize the interface in real time. Wearables will evolve from tools into true companions — adaptive, discreet, and intuitive.' 'XR, for me, is the best interface to interacting with the digital world. What happened in the digital world is being transformed with AI. So it just happens that this AI requires multi-modality.' said Qualcomm's Bakadir Whether I'm exploring the world and want extra facts about a landmark, or I'm stuck on things to eat and want some assistance on what to make from the things in my fridge, having AI directly on your face is the most natural form factor. 'AI will be the core intelligence layer. It will understand context, proactively assist, personalize the interface in real time. Wearables will evolve from tools into true companions — adaptive, discreet, and intuitive.' said David Jiang, CEO of Viture. We've made small steps towards that with Snapdragon AR1+ Gen 1 — allowing you to run a 1-billion parameter AI model entirely locally. That is significant for the future of smart glasses, but it's only one step forward. Now the next step is moving into agentic AI and personalization — using data to train your own device around you for more proactive, more agentic assistance that can help before you even think you were going to look for help. Remember when the Apple Watch came out? The real reason for it existing didn't come until a few years in. When those sensors came into their own, it became the go-to health tracker that it is now. I feel that the moment is coming for smart glasses. The use cases are currently limited, but the moment we start sticking sensors on them, not only would you be able to track physical health, you could even track emotional health, too. 'We believe that understanding emotions is a force multiplier for AI, in terms of it being effective for you in the context of wearing glasses all day. If you want AI to be really effective for you, it's critical that it understands how you're feeling in real-time." said Streen Strand, Emteq CEO. And why wouldn't you? In a February survey by Sentio University, 96% of AI users reach out for some therapeutic advice. Sensor tech is looking like a key focal point of the future of smart glasses — fueling not just eye-tracking and hand gestures, but pairing with AI for more personalization. 'We believe that understanding emotions is a force multiplier for AI, in terms of it being effective for you in the context of wearing glasses all day. If you want AI to be really effective for you, it's critical that it understands how you're feeling in real-time." We've done this dance before. Remember Google Glass? There's a reason why the phrase 'glassholes' exists, and it's because of the social stigma that came with wearing this advanced piece of tech directly on your face. 
Now the next step is moving into agentic AI and personalization — using data to train your own device around you for more proactive, more agentic assistance that can help before you even think to look for it.

Remember when the Apple Watch came out? The real reason for it to exist didn't arrive until a few years in, when those sensors came into their own and it became the go-to health tracker it is now. I feel that moment is coming for smart glasses. The use cases are currently limited, but the moment we start sticking sensors on them, not only would you be able to track physical health, you could even track emotional health, too.

'We believe that understanding emotions is a force multiplier for AI, in terms of it being effective for you in the context of wearing glasses all day. If you want AI to be really effective for you, it's critical that it understands how you're feeling in real time,' said Steen Strand, Emteq CEO. And why wouldn't you? In a February survey by Sentio University, 96% of AI users reach out for some form of therapeutic advice. Sensor tech is looking like a key focal point of the future of smart glasses — fueling not just eye tracking and hand gestures, but pairing with AI for more personalization.

We've done this dance before. Remember Google Glass? There's a reason the phrase 'glassholes' exists, and it's the social stigma that came with wearing an advanced piece of tech directly on your face.

Every new tech category goes through a settling-in process around the way it disrupts common social cues, as it moves from seeming traditionally impolite to just being the way things are. But with display tech in smart glasses, I feel that hump of social acceptance is going to take a bit more time to get over.

A great example is the Halliday glasses, which beam a 3.5-inch projected display into your eye from the top rim of the specs. All you have to do is look up at it, which on paper is seriously impressive. However, during my time talking to people wearing them at CES 2025, the number of perceived eye-rolls I got as they looked up at the screen certainly made me feel like an inconvenience!

And then, more broadly, with the display tech of tomorrow, you'll never really know whether someone is actually looking at you. At least current bird-bath panels make for slightly larger specs, so you're giving off a big enough 'do not disturb' signal. But when they disappear and the transition to waveguides happens, it will take time for society to acclimatize.

'We all lose our time to these black rectangles called smartphones, so I see waveguides on smart glasses as a great thing to just glance at when notifications roll in, without taking my phone out. But my wife is always on edge about whether I am actually paying attention to her,' said Lumus' Goldman.

Then, of course, there are the privacy concerns of wearing an always-on device on your face. How do you give permission to be seen by these glasses? What does that look like? We saw these become big issues with Google Glass in the early 2010s, and with a personalized AI assistant that needs to be always running to understand you, the worries will be significant and warranted.

'It's not like smartphones in that it's passive AI. There needs to be an AI actively listening to you that memorizes your routines, your conversations, everything about your day to deliver that efficient lifestyle,' said Carter Hou, CEO and co-founder of Halliday.

I know there are significant technical challenges on the road between where we are now and 2035, but more than anything, the cultural one is going to be the bigger mountain to climb. We've already gotten over the 'wearing glasses even though you don't need to' hurdle (look at hipsters wearing spectacles with no lenses, for example) — and surely it'll only be a matter of time before the technology just becomes a social norm, rather than people asking 'is that a camera in your glasses?'

There is a grand vision for 2035, but the future of smart glasses is a lot closer than you think. I initially thought the race to XR was only just beginning to heat up, but in reality, it's already at fever pitch. With rumored next-gen Ray-Ban Meta smart glasses, the impending launch of Snap Specs in 2026, and, let's not forget, Apple being 'hell-bent on creating an industry-leading product before Meta can,' we're on the precipice of the next step forward in this space.

But what makes this category so fascinating to me is that no one company has all the answers. Every dreamer in this area has one piece of the puzzle, and I do believe that in ten years' time, these will all come together to become that next category-defining product — that smartphone moment for wearable technology.
So buckle up, because it's going to be a helluva ride over the next decade.

NotebookLM: There's Never Been a Better Time to Try Google's Best AI Tool

CNET

07-08-2025

  • CNET


Google's NotebookLM is easily my all-time favorite AI tool. I lean on it for many things, from making sense of my nonsensical notes to grabbing just the essential pieces from otherwise hard-to-digest information. Whether you're just looking for a quick sum-up of material or you're in the trenches with NotebookLM, pulling specific insights from multiple sources, it's incredibly flexible in the ways you can work with it. It's a perfect study buddy for students and a work ally for streamlining workflows and organization.

Google regularly rolls out new features for NotebookLM, making it feel more robust without compromising the overall simplicity that makes it so approachable. If you're new to using it, or a long-time user looking for a refresher on what's been added lately, I'll break down NotebookLM's highlights, its features, and the moment it became an indispensable tool for my day-to-day work. For more, check out Google's plans for smart glasses with AndroidXR.

(Video: Everything Announced at Google I/O 2025)

NotebookLM isn't just Google Keep stuffed with AI, nor is it just a chatbot that can take notes. It's both and neither. Instead of asking questions of Gemini, only for it to find an answer from the ether of the internet, NotebookLM will search only through the sources you provide it. It's a dead-simple concept that feels like one of the most practical uses of AI. And Google didn't stop there. Now it can do so much more, and it'll reward your poking around to see what it can do for you. And features like its impressive Audio Overviews have since trickled down into Gemini itself, allowing them to be used across a much wider set of Google's products.

What is NotebookLM?

NotebookLM is a Gemini-powered note-taking and research assistant tool that can be used in a multitude of ways. It all starts with the sources you feed it — webpage URLs, YouTube videos or audio clips — allowing you to pull multiple sources together into a cohesive package and bring some organization to your scattered thoughts or notes.
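As a toy sketch of that source-grounding idea: answers come only from the documents you supply, and anything outside them is refused. Everything below (the sample sources, the crude keyword ranking) is invented to illustrate the concept and is not how NotebookLM is actually built:

```python
# Toy illustration of source-grounded Q&A: the "assistant" may only answer
# from the sources added to the notebook, never from general knowledge.
notebook_sources = {
    "lecture1.txt": "Photosynthesis converts light energy into chemical energy.",
    "lecture2.txt": "Chlorophyll absorbs mainly red and blue light.",
}

def answer(question: str) -> str:
    words = set(question.lower().split())
    # Rank sources by crude keyword overlap with the question.
    ranked = sorted(
        notebook_sources.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    name, text = ranked[0]
    if not words & set(text.lower().split()):
        return "Not found in your sources."  # no answers from "the ether"
    return f"{text} [source: {name}]"

print(answer("What does chlorophyll do?"))    # grounded answer, with citation
print(answer("Who won the 2010 World Cup?"))  # refused: not in the sources
```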
The most obvious use case for NotebookLM is school or work. Think of it: you've kept up with countless classes, typed notes for every one, and perhaps even recorded some lectures. Sifting through everything individually can eventually get you to some semblance of understanding, but what if you could get it all to work together?

Once you've uploaded your sources, Gemini will get to work creating an overall summary of the material. From there, you can begin asking questions about specific topics, and information from the sources will be displayed in an easy-to-understand format. This alone may be enough for some people just looking to get the most out of their notes, but it's really just scratching the surface.

Available for desktop and mobile

NotebookLM has a three-panel layout. Screenshot by Blake Stimac/CNET

NotebookLM has been available for a while on the desktop, where it's broken into a three-panel layout consisting of Source, Chat and Studio panels. Both the Source and Studio panels are collapsible, so you can have a full-screen chat experience if you prefer. While the Source and Chat panels are pretty self-explanatory, the Studio panel is where the magic happens (though some of its features can also be triggered directly from the Chat panel). This is where you can get the most out of your NotebookLM experience.

The NotebookLM app: information alchemy in your pocket

The mobile app for Android and iOS launched the day before Google I/O 2025. Screenshots by Blake Stimac/CNET

Those familiar with the desktop experience will feel right at home with the mobile apps for iOS and Android. The streamlined app lets you switch between the Source, Chat and Studio panels via a menu at the bottom. When you go to the view that shows all of your notebooks, you'll see tabs for Recent, Shared, Title and Downloaded. Not everything is in the app yet, but it's likely just a matter of time before it matches the web version's full functionality.

Audio Overviews

If you didn't hear about NotebookLM when it was first announced, you likely did when Audio Overviews were released. Once you have at least one source uploaded, you can opt to generate an Audio Overview, which provides a 'deep dive' on the source material. These overviews are created by none other than Gemini, and the result isn't just a quick summary of your material in audio format — it's a full-blown podcast with two 'hosts' who break down complex topics into easy-to-understand pieces of information. They're incredibly effective, too, often asking each other questions to dismantle certain topics.

The default length of an Audio Overview will vary depending on how much material there is to cover and the complexity of the topic — though I'm sure there are other factors at play. In my testing, a very short piece of text created a five-minute audio clip, whereas two lengthier and denser Google Docs documents I uploaded created an 18-minute overview. If you want more control over the length of your Audio Overview, you're in luck. As announced in a blog post during Google I/O earlier this year, users now have three options to choose from: shorter, default and longer. This is perfect whether you want a short, dense podcast-like experience or want to get into the nitty-gritty of a subject with a longer Audio Overview.

You can interact with your AI podcasters

It gets even better. Last December, NotebookLM got a new design and new ways to interact with Audio Overviews. The customize button allows you to guide the conversation so that key points are covered: type in your directive and then generate your Audio Overview.
Now, if you want to make this feature even more interactive, you can choose Interactive mode, which is still in beta, to join the conversation. The clip will play, and if you have a question about something that's said, you can click the join button. Once you do, the speakers will pause, acknowledge your presence and ask you to chime in with thoughts or questions, and you'll get a reply.

I wanted to try something a little different, so I threw in the lyrics of a song as the only source, and the AI podcast duo began to dismantle the motivations and emotions behind the words. I used the join feature to point out a detail in the lyrics they didn't touch on, and the two began to dissect what my suggestion meant in the context of the writing. They then began linking the theme to other portions of the text. It was impressive to watch: they handled the emotional weight of the song well, and tactfully at that.

Video Overviews

The Video Overviews feature started reaching users in late July and is still rolling out at the moment — of the three Google accounts I used NotebookLM with, only one has Video Overviews available. The feature creates an animated visual aid to go alongside your Audio Overview. For now, Google says that Video Overviews will start out as slideshows, which suggests more types of overviews will be available in the future; a Veo-powered Video Overview wouldn't be a surprising addition.

To test the feature, I grabbed 1,600 or so words from the Odyssey. It took nearly 20 minutes for the overview to generate — and even then, it wasn't actually ready. When I clicked the play button, NotebookLM went back to 'Generating Video Overview... This may take a while,' and it stuck there so long that I decided to delete the entire notebook and start over. Unfortunately, the second attempt seemed to get stuck too, though I may just have been impatient while the overview was being processed. I cut the word count by half, and sure enough, that sped up the generation significantly: I was watching the slideshow within five or six minutes.

The current version of Video Overviews is fine, but it isn't anything to write home about. There are visual aids, but the overview generated from the Odyssey text was largely an Audio Overview with slides of quotes from the source, and it didn't add much to the experience overall. I have little doubt this will change, but for now Video Overviews feel more like a slightly upgraded version of Audio Overviews than their own thing.

Mind Maps

Generating a Mind Map is just one of several powerful features in NotebookLM. Google/Screenshot by CNET

I'd heard interesting things about NotebookLM's Mind Map feature, but I wanted to go in blind when I tried it out, so I did a separate test. I took roughly 1,500 words of Homer's Odyssey and made that my only source. I then clicked the Mind Map button, and within seconds an interactive, categorical breakdown of the text was displayed for me to poke around in. Many of the sections had subsections for deeper dives, some of which were dedicated to single lines for dissection. Clicking on a category or end-point of the map will open the chat with a prefilled prompt.
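As a rough sketch of the structure that interaction implies, here is a toy mind-map node whose leaves generate that style of prefilled prompt. The class and the wording are assumptions for illustration, not NotebookLM internals:

```python
from dataclasses import dataclass, field

# Toy mind-map node: categories hold subsections, and "clicking" a node
# yields the kind of prefilled prompt described above.
@dataclass
class MindMapNode:
    label: str
    children: list = field(default_factory=list)

    def prefilled_prompt(self, parent_label: str) -> str:
        return (f"Discuss what these sources say about {self.label}, "
                f"in the larger context of {parent_label}.")

root = MindMapNode("Odyssey excerpt", [
    MindMapNode("Alternative (worse)", [MindMapNode("now without remedy")]),
])

subsection = root.children[0]
leaf = subsection.children[0]
print(leaf.prefilled_prompt(subsection.label))
# Discuss what these sources say about now without remedy,
# in the larger context of Alternative (worse).
```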
I chose to dive into the line 'now without remedy,' and once clicked, the chat portion of NotebookLM reopened with the prefilled prompt, 'Discuss what these sources say about Now without remedy, in the larger context of [the subsection] Alternative (worse).' The full line was displayed, including who said it, what it was in response to, and any motivations (or other references) behind why the line was said in the text.

Public and featured notebooks

Initially, notebooks were bound to your account alone, but Google has added the option to share a notebook with specific people or to make it entirely public and shareable via a link. While it's a simple addition, it opens the door to collaboration if you're working on a notebook with someone else, as you can grant edit or view-only access. With the latter, a teacher could create a study guide on a particular subject for an exam or homework assignment and share it with a class.

The introduction of public and shareable notebooks paved the way for another feature that Google dropped in July: featured notebooks. Publicly available to anyone, featured notebooks come from publications, authors and researchers and cover a variety of topics. The list is limited to only eight notebooks at the moment, but more will come over time.

Study guides and more

If everything Audio Overviews and Mind Maps can do sounds like what a student might need in the perfect study buddy, NotebookLM has a few other features that solidify it in that place.

Study guides: After you've uploaded a source, you can create a quick study guide based on the material. It automatically produces a document with a quiz, potential essay questions, a glossary of key terms, and answers for the quiz at the bottom. If you want, you can even convert the study guide into a source for your notebook.

FAQs: Whether you're using it for school or want to create an FAQ page for your website, this button generates a series of potentially common questions based on your sources.

Timeline: If you're looking for a play-by-play timeline, it's built right in. Creating a timeline for the Odyssey excerpt broke down the main events in a bulleted list and placed them based on the times mentioned in the material. If an event takes place at an unspecified time, it appears at the top of the timeline with a note saying so. A cast of characters for reference is also generated below the timeline of events.

Briefing document: The briefing document is just what it sounds like, giving you a quick snapshot of the key themes and important events to get someone up to speed. It includes specific quotes from the source and their locations, and a summary of the material is created at the bottom of the document.

Small additions add up

Big features like Audio Overviews tend to get all the attention, but Google has recently rolled out a couple of smaller additions, and lifted previous restrictions, that make NotebookLM accessible to more people and easier to use. NotebookLM had been restricted to people aged 18 or older, but Google has relaxed this so younger people can use the tool to help them learn and study. Now NotebookLM can be used by anyone aged 13 or older, though some countries may have different age restrictions.

NotebookLM also recently added one of its most requested features: bulk URL uploads. It sounds like a small addition, but it's sure to be a time-saver, as previously you could only add website URLs one at a time.
How NotebookLM became an indispensable tool for me

I already really liked NotebookLM's concept and execution in its 1.0 days, and revisiting the new features only strengthened my appreciation for it. My testing was mostly for fun, to see how the tool can flex, but using it when I 'needed' it is what really showed me how powerful it can be, even for simple things.

During a product briefing, I did my typical note-taking: open a Google Doc, start typing fragmented thoughts on key points, and hope I could translate what I meant when I needed to refer back to them. I knew I would also receive an official press release, so I wasn't (too) worried about it, but I wanted to put NotebookLM to the test in a real-world situation where I was using it for real — not just tinkering, when nearly anything seems impressive as long as it does what you tell it to.

I decided to create a new notebook and make my crude notes (which looked like a series of bad haikus at first glance) the only source, just to see what came out on the other end. Not only did NotebookLM fill in the blanks, but the overall summary read almost as well as the press release I received the following day. I was impressed. It felt alchemical — NotebookLM took some fairly unintelligible language and turned it into not just something passable, but a pretty impressive description. Funnily enough, I've since become a more thorough note-taker, but I'm relieved to know I have something that can save the day if I need it to.

If you need more from NotebookLM, consider upgrading

Most people will likely never need to pay for NotebookLM, as the free version is robust enough. That said, if you need more, upgrading provides everything from the free version, along with:

  • 5x more Audio Overviews, Video Overviews, notebooks, queries, and sources per notebook.
  • Access to premium features such as chat customization, advanced sharing and notebook analytics.

For more, don't miss how Google is going all-in on AI video with Flow and Veo 3.

Don't worry, Samsung's Android XR headset is still launching this year

Android Authority

31-07-2025

  • Android Authority


Lanh Nguyen / Android Authority

TL;DR

  • During its most recent earnings call, Samsung re-confirmed that its Project Moohan Android XR headset is launching 'this year.'
  • Samsung has previously hinted at a 2025 release date, and this is the latest confirmation that it's still on track.
  • An earlier report suggested Project Moohan could be released as early as October.

Samsung has had a busy 2025, launching the Galaxy S25 series at the start of the year and most recently releasing the Galaxy Z Fold 7, Z Flip 7, and Galaxy Watch 8 series. But there's another Samsung gadget still on track to be released this year, and it's arguably the company's most interesting: its Project Moohan Android XR headset.

Samsung has remained pretty tight-lipped about Project Moohan since the headset was first teased in January, though it has repeatedly insisted that the headset is launching in 2025. But as the months roll on with still no sight of it, doubt has begun to creep in. Thankfully, Samsung is committing to getting its Android XR headset on store shelves before the end of 2025. In the company's latest earnings call on July 30, Samsung confirmed that Project Moohan will still launch 'this year.' The full quote reads as follows:

'Meanwhile, we are also preparing to introduce next-generation innovative products, including our XR headset and TriFold smartphone this year. Our XR headset, which seamlessly integrates the XR ecosystem developed in partnership with Google as well as multimodal AI capabilities, will serve as a key stepping stone in solidifying our leadership in future technologies and further expanding the Galaxy ecosystem.'

Although Samsung didn't get specific about when 'this year' we'll see Project Moohan, previous reporting suggests it could be sooner than you might expect. In June, one report claimed that Samsung would hold a Project Moohan launch event on September 29 this year. The headset would then reportedly launch on October 13 in South Korea, with availability in other markets (such as the US) following at a later date.

For a device set to launch within the next five months, there's a lot we still don't know about Samsung's first Android XR headset. What kind of first-party XR experiences is Samsung crafting for it? How long will the battery last? What's the display resolution? And, perhaps most importantly, how much will it cost? Oh, and what's it actually going to be called? The good news is that we should have all of those answers sooner rather than later.
