
Google is fixing Meet's messy invitations on Android (APK teardown)
TL;DR
- Google is working on a simpler method to add new people to an ongoing call in Meet for Android.
- It could replace the current method, which involves sharing the call link with people.
- With the new functionality, you would be able to invite contacts using their email addresses or phone numbers.
It's been a few years since Google Meet was merged with Duo for a more integrated video calling experience on mobile. While the new app has many exciting features, such as augmented reality filters for video calls, certain basic functionalities, like adding more people to existing calls, can be cumbersome and not user-friendly. Thankfully, Google may now be addressing this issue and introducing an easier way to add contacts to ongoing calls.
You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else.
An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.
Currently, Google Meet on Android limits how you can invite people to ongoing calls. The existing method requires you to share a link with the new participant, which they can tap to join the meeting. This is unlike the web interface, where you can simply type someone's email address to invite them to the call.
However, we have learned that Google is working to bring an experience similar to that from the web interface to its Android app. By tinkering with Google Meet's version 317.0.786350680 on Android, we enabled an 'Add others' button that makes it much easier to add participants.
Existing method to invite more people
Upcoming method to invite more people
Contact selection page for adding new participants
We also got a look at the workflow for adding more people to a Google Meet call. After tapping the 'Add others' button, a contacts page appears where you can enter the new participant's email address or phone number, or choose one from the phone's contact list. A 'Call' button also appears in the top-right corner of this page.
After tapping 'Call,' the interface switches to the 'People' tab, showing a list of both current participants and invited users. Simultaneously, the call window minimizes into a picture-in-picture view. The call should appear as a standard incoming call for the invited individuals, while their names are shown with a 'Calling' label on your screen. If they decline or don't answer, the label changes to 'No answer,' and a new button appears. While it seems to allow you to call them again, it's more likely intended for existing participants to leave a video message together.
Since it is still under development, we're unsure when Google will make the functionality widely accessible to users. However, we expect it to happen soon. Notably, Google Hangouts, a now-deprecated video calling app by Google, also allowed users to invite people directly into the call on both the web interface and the Android app. So, it's a shame Meet doesn't do this yet.

Related Articles


CNET
2 hours ago
AI Is Taking Over Your Search Engine. Here's a Look Under the Hood
For decades, the way we find information on the internet changed only in small ways. Doing a traditional Google search today doesn't feel all that different from when, in the 1990s, you would Ask Jeeves. Sure, a lot has changed under the hood, the results are likely far more relevant and the interface has some new features, but you're still typing in keywords and getting a list of websites that might hold the answer. That way of searching, it seems, is starting to go the way of AltaVista, may it rest in peace. In May, Google announced the rollout of its new AI Mode for search, which uses a generative AI model (based on the company's Gemini large language model) to give you conversational answers that feel a lot more like having a chat and less like combing through a set of links. Other companies, like Perplexity and OpenAI, have also deployed search tools based on gen AI. These tools, which merge the functionality of a chatbot and a traditional search engine, are quickly gaining steam. You can't even escape AI by doing just a regular Google search: AI Overviews have been popping up atop those results pages since last year, and about one in five searches are now showing this kind of summary, according to a Pew Research Center report. I'm surprised it's not even more than that. These newfangled search tools feel a lot like your typical chatbot, like ChatGPT, but they do things a little differently. Those differences share a lot of DNA with their search engine ancestors. Here's a look under the hood at how these new tools work, and how you can use them effectively.

[Video: Everything Announced at Google I/O 2025]

Search engines vs. AI search: What's the difference?

The underlying technology of a search engine is kinda like an old library card catalog. The engine uses bots to crawl the vast expanses of the internet to find, analyze and index the endless number of web pages. Then, when you do a search to ask who played Dr. Angela Hicks on ER, because you're trying to remember what else you've seen her in, it will return pages for things like the cast of ER or the biography of the actor, CCH Pounder. From there, you can click through those pages, whether they're on Wikipedia or IMDB or somewhere else, and learn that you know CCH Pounder from her Emmy-winning guest appearance on an episode of The X-Files. "When customers have a certain question, they can type that question into Google and then Google runs their ranking algorithms to find what content is the best for a particular query," Eugene Levin, president of the marketing and SEO tool company Semrush, told me.
Generally, with a traditional search, you have to click through to other websites to get the answer you're looking for. When I was trying to figure out where I recognized CCH Pounder from, I clicked on at least half a dozen different sites to track it down. That included using Google's video search -- which combs an index of videos across different hosting platforms -- to find clips of her appearance on The X-Files.

[Image: Google announced AI Mode at its I/O developer conference in May. Google/Screenshot by Joe Maldonado/CNET]

These multiple searches don't necessarily have to happen. If I just want to know the cast of ER, I can type in "cast of ER" and click on the Wikipedia page at the top. You'll usually find Wikipedia or another relevant, trustworthy site at or near the top of a search result page. That's because a main way today's search algorithms work is by tracking which sites and pages get the most links from elsewhere on the web. That model, which "changed the game for search" when Google launched it in the 1990s, was more reliable than indexing systems that relied on things like how many times a keyword appeared on a page, said Sauvik Das, associate professor at Carnegie Mellon University's Human-Computer Interaction Institute. "There's lots of cookie recipes on the web, but how do you know which ones to show first?" Das said. "Well, if a bunch of other websites are linking to this website for the keywords of 'cookie recipe,' that's pretty difficult to game." AI-powered search engines work a little differently, but operate on the same basic infrastructure. In my quest to see where I recognized CCH Pounder from, I asked Google's AI Mode, literally, "Where do I recognize the actress who plays Dr. Angie Hicks on ER from?" In a conversation that felt far more like chatting with a bot than doing searches, I narrowed it down.
The first result gave me a list of shows and movies I hadn't seen, so I asked for a broader list, which featured her guest appearances on other shows. Then I could ask for more details about her X-Files appearance, and that narrowed it down. While the way I interacted with Google was different, the search mechanisms were basically the same. AI Mode just used its Gemini model to develop and process dozens of different web searches to gather the information needed, Robby Stein, vice president of product for Google Search, told me. "A user could've just queried each of those queries themselves." Basically, AI Mode did the same thing I did, just a lot faster.

So many searches, so little time

The approach here is called "query fan-out." The AI model takes your request and breaks it down into a series of questions, then conducts searches to answer those components of the request. It then takes the information it gathers from all those searches and websites and puts it together in an answer for you. In a heartbeat. Those searches are using the same index that a traditional search would. "They work on the same foundation," Levin said. "What changes is how they pull information from this foundation." This fan-out process allows the AI search to pull in relevant information from sites that might not have appeared on the first page of traditional search results, or to pull a paragraph of good information from a page that has a lot more irrelevant information. Instead of you going down a rabbit hole to find one tiny piece of the answer you want, the AI goes down a wide range of rabbit holes in a few seconds. "They will anticipate, if you're looking for this, what is the next thing you might be interested in?" Levin said. The number of searches the AI model will do depends on the tool you're using and on how complicated your question is.
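The query fan-out loop described above can be sketched in a few lines of Python. This is an illustrative outline, not Google's actual pipeline: the `decompose`, `search_index`, and `synthesize` helpers are hypothetical stand-ins for an LLM call and a search-index lookup.

```python
# Illustrative sketch of "query fan-out" -- NOT Google's real implementation.
# decompose(), search_index(), and synthesize() are hypothetical helpers
# standing in for LLM calls and a search-engine index lookup.

def decompose(question: str) -> list[str]:
    # A real system would have an LLM break the request into narrower
    # sub-queries; these are hard-coded purely for illustration.
    return [
        "cast of ER Dr. Angela Hicks",
        "CCH Pounder filmography",
        "CCH Pounder X-Files guest appearance",
    ]

def search_index(query: str) -> list[str]:
    # A real system would hit the same index a traditional search uses.
    return [f"snippet for: {query}"]

def synthesize(question: str, snippets: list[str]) -> str:
    # A real system would have the LLM write a conversational answer
    # grounded in the retrieved snippets.
    return f"Answer to {question!r} based on {len(snippets)} snippets."

def fan_out(question: str) -> str:
    sub_queries = decompose(question)      # 1. break the request down
    snippets = []
    for q in sub_queries:                  # 2. run many searches at once
        snippets.extend(search_index(q))
    return synthesize(question, snippets)  # 3. merge into one answer
```

The point of the sketch is the shape of the loop: many narrow searches against the ordinary index, then one synthesis step, which is why the author's manual half-dozen searches and AI Mode's approach end up equivalent.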
AI Mode using Google's Deep Search will spend more time and conduct more searches, Stein said. "Increasingly, if you ask a really hard question, it will use our most powerful models to reply," Stein said. The large language models that power these search engines also have their existing training data to pull from or use to guide their searches. While a lot of the information is coming from the up-to-date content it finds by searching the web, some may come from that training data, which could include reams of information ranging from websites like this one to whole libraries of books. That training data is so extensive that lawsuits over whether AI companies actually had the right to use that information are quickly multiplying. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI search isn't just a chatbot

Not relying on training data is one thing that sets an AI-powered search engine apart from a traditional chatbot, even though the underlying language model might be largely the same. While ChatGPT Search will scour the internet for relevant sites and answers, regular ChatGPT might rely on its own training data to answer your question. "The right answer might be in there," Das said. "It might also hallucinate a likely answer that isn't anywhere in the pre-training data." AI search uses a concept called retrieval-augmented generation to incorporate what it finds on the internet into its answer. The model is pointed at a source (in this case, the search engine index) and told to look there instead of making something up if it can't find the answer in its training data. "You're telling the AI the answer is here, I just want you to find where," Das said. "You get the top 10 Google results, and you're telling the AI the answer is probably in here."
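Retrieval-augmented generation, as described above, amounts to stuffing retrieved text into the model's prompt and instructing it to answer only from that text. A minimal sketch, assuming hypothetical `retrieve` and `llm` helpers (neither is any real library's API):

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# retrieve() and llm() are hypothetical stand-ins for a search-index
# lookup and a language-model call.

def retrieve(query: str, k: int = 10) -> list[str]:
    # A real system would return the top-k search results for the query.
    return [f"result {i} for {query!r}" for i in range(1, k + 1)]

def llm(prompt: str) -> str:
    # A real system would call a language model here.
    return f"(model answer grounded in a {len(prompt)}-character prompt)"

def rag_answer(question: str) -> str:
    docs = retrieve(question)  # "the answer is probably in here"
    context = "\n".join(docs)
    # The instruction below is what makes this retrieval-AUGMENTED:
    # the model is told to ground its answer in the retrieved sources
    # rather than in whatever its training data suggests.
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```

As the section goes on to note, this grounding reduces hallucinations but doesn't eliminate them; the model can still misread or over-trust a source it was handed.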
[Image: Perplexity offers AI-powered search through its app and through a newly announced browser. Stefani Reynolds/Bloomberg via Getty Images]

Can you really trust AI search results?

These AI-powered search tools might be more reliable than just using a chatbot itself, because they're pulling from current, relevant information and giving you links, but you still have to think critically about it. Here are some tips from the experts:

Bring your human skepticism

Consider how bad people are at telling when you're sarcastic on the internet. Then think about how bad a large language model might be at it. That's how Google's AI Overviews came up with the idea to put glue on pizza -- by pulling information from a humorous Reddit post and repeating it as if it were real culinary advice. "The AI doesn't know what is authentic and what is humorous," Das said. "It's going to treat all that information the same." Remember to use your own judgement and look for the sources of the information. They might not be as accurate as the LLM thinks, and you don't want to make important life decisions based on somebody's joke on an internet forum that a robot thought was real.

AI can still make stuff up

Even though they're supposed to be pulling from search results, these tools can still make things up in the absence of good information. That's how AI Overviews started creating fake definitions for nonsensical sayings. Retrieval-augmented generation might reduce the risk of outright hallucinations but doesn't eliminate it, according to Das. Remember that an LLM doesn't have a sense of what the right answer to a question is. "It's just predicting what is the next English word that would come after this previous stream of other English words or other language words," Das said. "It doesn't really have a concept of truthiness in that sense."

Check your sources

Traditional search engines are very hands-off.
They will give you a list of websites that appear relevant to your search and let you decide whether you want to trust them. Because an AI search is consolidating and rewriting that information itself, it may not be obvious when it's using an untrustworthy source. "Those systems are not going to be entirely error-free, but I think the challenge is that over time you will lose an ability to catch them," Levin said. "They will be very convincing and you will not know how to really go and verify, or you will think you don't need to go and verify." You can check every source yourself, but that's exactly the kind of work you were probably hoping to avoid by using this new system that's designed to save you time and effort. "The problem is if you're going to do this analysis for every query you perform in ChatGPT, what is the purpose of ChatGPT?" Levin said.


Business Insider
3 hours ago
China Calls for Global AI Rules as U.S. Escalates Tech Fight – What Investors Should Watch
China is proposing to lead the creation of a new international body to shape the future of artificial intelligence. Speaking at the World Artificial Intelligence Conference in Shanghai, Premier Li Qiang called for a World AI Cooperation Organization, aiming to make AI development more inclusive and to prevent it from being dominated by a handful of nations or companies. The proposal comes as the global AI race accelerates. Premier Li cited the need for shared governance to address the risks tied to AI, from job losses to security concerns. Former Google (GOOG) chief executive Eric Schmidt backed the idea of global collaboration, saying the U.S. and China should work together to maintain stability and ensure human control over powerful AI systems.

Tensions Rise as China Courts Allies and the U.S. Doubles Down

However, turning that vision into a working framework will not be easy, as the U.S. is taking a different path. Just days before the conference, President Donald Trump signed new executive orders to ease regulations and boost energy access for AI infrastructure, including data centers. These moves are designed to strengthen companies like OpenAI and Google while reinforcing America's lead in advanced AI. In the meantime, geopolitical friction remains high. U.S. restrictions on Nvidia Corporation (NVDA) chips continue to limit China's access to high-end semiconductors. Premier Li acknowledged these supply chain issues and reaffirmed China's goal to reduce its reliance on foreign technology. That includes support for homegrown companies like DeepSeek, which has gained attention for scaling up open-source models and AI agents. China's strategy also includes outreach to the Global South, including partnerships with Brazil and African nations.
However, international trust remains a hurdle. Western companies and governments are hesitant to align with a governance model led by Beijing, especially regarding concerns around data access, intellectual property, and dual-use technologies.

Takeaway for Investors

For investors, the gap between cooperation and competition is clear. Chinese firms are racing to set their own benchmarks, while U.S. players double down on domestic infrastructure and AI regulation. The idea of a global AI framework may gain traction diplomatically, but market dynamics suggest a more fragmented path forward. Whether this initiative reshapes AI development or becomes another diplomatic flashpoint will depend on how governments and companies balance access, risk, and control in the months ahead. Using TipRanks' Comparison Tool, we've analyzed several leading AI stocks that could be influenced by geopolitical tensions, shifting regulations, and broader market dynamics.


Android Authority
4 hours ago
6 features from other skins I want on One UI
C. Scott Brown / Android Authority

One UI has been with us for six years now, and it's easily the best Android skin Samsung has made. One UI is smoother, more reliable, and easier to use than Samsung Experience or TouchWiz, the skins that preceded it. Aside from a blip with One UI 7, it's been updated quicker than ever, often beating other Android skins. The features One UI delivers have made it my favorite flavor of Android since I first used it on my Galaxy S10 Plus, but there are still things I'd like to change. Motorola, OnePlus, and others have added exciting features to their Android skins, and I'd love to see some of them adopted by Samsung in the next version of One UI.

Motorola: Intuitive gestures

It's hard to believe it's been twelve years since the original Moto X hit shelves. That phone, along with the Moto G, shaped Motorola's future over the following decade, and one of the best things it introduced is what the company now calls Kinetic Gestures. The ones I want most are the two that started it all — Fast Torch and Quick Capture. On a Motorola phone, performing a double karate chop toggles the torch on or off, something that's incredibly useful when you're fumbling with keys in the dark and need to add some light to the situation. I use this a lot on my 2023 Razr Plus, where this gesture is much faster than unlocking the phone and swiping through quick settings. Quick Capture opens the camera or switches between the front and rear lenses if the camera is already open. This gesture requires a double flick of the wrist, and once you get used to it, it's the easiest way of quickly launching the camera so you don't miss an important shot.
Motorola: Easy customisation

One UI has plenty of theming options, more than Motorola does, but it's all split across too many different menus and apps. Theming icons is in Theme Park, fonts are in the settings menu, and the Material You colors are in a menu accessed from the home screen. It's even worse on Samsung's Z Flip series, where all of the options for the cover screen are spread across even more menus. It adds too much friction to customising your Samsung phone. Motorola has gone about things in a simple, thoughtful way. All of the options for themes, icons, fonts, home screen grid sizes, and the cover screen are in a single place. Long-press the home screen, open the personalise menu, and there are all of the options you could ever need. Sure, I miss some of the more advanced tweaks from One UI, but Motorola's approach of listing everything together is more approachable for users.

Pixel: Now Playing

Ryan Haines / Android Authority

Now Playing is one of those small features that you don't think about until you use a phone that doesn't have it. I used a Pixel 9 Pro as my daily phone recently, the longest I'd used a Pixel since I owned a 3XL, and it reminded me how many subtle quality of life features you get on Pixels. Knowing what song is playing in the background wherever I am, without having to ask my phone to do it, is more convenient than the alternative. The Now Playing history is great, too. I've planned to listen to a song that I've heard and searched for when out and about, only to forget what the song was by the time I get home. Now Playing keeps a history of the songs it hears, so I don't have to rely on my memory.

Pixel: Call Screening

Andy Walker / Android Authority (Pixel 8a)

Call Screening debuted on the Pixel 3 in 2018, but I never got to experience it as it didn't come to the UK until 2021. When I used it with the Pixel 9 Pro recently, I couldn't believe how much better it is than Samsung's alternative.
Samsung's Call Assist can do the basics of screening calls, transcribing the conversation, and live translation. But it's the extras that Google has added to Call Screening over the years that Samsung can't keep up with. Call Screening on Pixels can wait on hold for you and notify you when the person you're calling comes back, tell you how long the average wait time is for a call to a business, and even map and label phone tree options. Samsung is off to a good start with Call Assist, but there's a lot more work to be done.

OnePlus: Screenshot pixelation

Zac Kew-Denniss / Android Authority

I take a lot of screenshots in this line of work, and I often need to censor things like my email, address, and other personal information. One UI does have a blur and pixelation tool in the gallery, but it's a manual process that can be quite messy if you don't have an S Pen to make things easier. OnePlus and Oppo devices have an AI-powered auto mode that applies a neat blur to what it identifies as sensitive information. It isn't perfect. In the example above, I had to manually censor two elements that it missed, but that was much less painful than having to do the whole thing myself, and features like this will only improve with updates.

OnePlus: Open Canvas

OnePlus debuted Open Canvas on the OnePlus Open, and since then, it's come to many of the company's devices. It's a new approach to window management on mobile that is more intuitive and makes the most use out of the space available on a screen. Before OnePlus introduced this, One UI had my favorite multitasking system, but Open Canvas blows it out of the water. Google has taken some inspiration from Open Canvas, adding a 90:10 split to multi-window that you can try in One UI 8 on the Fold 7, Flip 7, Flip FE, and the S25 series running the beta. It's an improvement, but still doesn't come close to what OnePlus is doing, and I'd love to try Open Canvas on a big screen, like my Galaxy Tab S10 Plus.
One UI 9 needs to impress

Zac Kew-Denniss / Android Authority

One UI 8 feels like a stop-gap update. One UI 7 made a lot of big changes to Samsung software, most of which were welcome, but the delayed and fragmented update rollout left a bitter taste. One UI 8 is shaping up to have a much smoother release, but there's almost nothing new here to be excited about. It feels like One UI 8 received only minimal changes so that Samsung could push it out the door quickly and act as damage control for last year. One UI 9, whenever we see it, needs to give us something to be excited about, and looking to other OEM skins for inspiration, drawing on what Motorola, Google, and OnePlus users love about their phones, would be a good place to start.