Google, Oppo, Moto and Honor are finally giving us the AI we deserve
I don't know about you, but I'm tired of the term AI. When every company in the industry is using it, the term loses its meaning, and I've grown weary of AI use cases that I barely use.
Artificial intelligence was meant to make our lives easier, but the first era of AI was all about generative uses. It has produced millions of memes, can turn almost anyone into an artist, and has ushered in a deepfake era that phone makers are now hoping to solve with, ironically, more AI.
Yet the absence of a use case that everyone actually wants is also why we're about to enter the next era: agentic AI. The difference is key: the first era was about creating new content, while the second is all about AI doing things that make your life significantly easier.
I've just experienced five different examples of the future of AI, and it's clear that we're finally about to get the AI that we have been dreaming about.
Google is pushing AI on smartphones forward for the hundreds of millions of users who use Gemini or OEM partner AI solutions. The future of Google Gemini was on show on Android Avenue at MWC 2025, and what I saw made me super excited.
First, AI is finally addressing one of the biggest issues since the advent of the smart home: getting things to work together properly. Instead of needing to build complex routines yourself, you can now prompt Gemini to do so for you.
The example used to demonstrate it was a dog sneaking a cookie from a kitchen counter. In the demo, the user first asked Gemini to review the camera footage and work out whether the culprit was the pet or the child, then asked Gemini to build a routine so the lights would come on and a preset broadcast would play on every speaker to stop it happening again.
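To make the idea concrete, here's a rough sketch of what a prompt like that might be compiled into behind the scenes. The Routine class, the trigger string, and the action strings below are my own hypothetical stand-ins for illustration, not Google Home's or Gemini's actual format.

```python
# Illustrative sketch only: what "warn the dog off the counter" could look
# like once an assistant turns it into an ordinary smart-home routine.
# Nothing here is a real Google Home API; names and fields are invented.
from dataclasses import dataclass


@dataclass
class Routine:
    name: str
    trigger: str        # event that starts the routine
    actions: list[str]  # ordered device actions to run


cookie_thief_routine = Routine(
    name="Keep the dog off the counter",
    trigger="kitchen camera detects the pet on the countertop",
    actions=[
        "turn on the kitchen lights at full brightness",
        "broadcast the preset 'Off the counter!' message to all speakers",
    ],
)

if __name__ == "__main__":
    print(f"Routine: {cookie_thief_routine.name}")
    print(f"When: {cookie_thief_routine.trigger}")
    for step in cookie_thief_routine.actions:
        print(" -", step)
```

The point is that the user never has to think in terms of triggers and actions; the prompt does that translation for them.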
This is the type of AI feature that I can see my mother using, as is the preview of the new Gemini Live. In the coming weeks, all Gemini Advanced customers will be able to share a screen with Gemini, and mobile users will also be able to use their phone camera to ask questions about their surroundings.
The possibilities are endless with these new features, and I love that Google is making them simple enough that anyone can use them.
Motorola didn't unveil any new hardware at MWC, but that didn't stop it from showing off a suite of new AI features that will transform how you use your phone.
The biggest of these is Smart Connect, which will transform the integration of all Android and Windows devices, not just those made by Motorola and its parent company, Lenovo. It also provides the best ecosystem integration I've seen, aside from Apple's.
Smart Connect allows you to link a laptop, tablet, and phone together to open a range of new features. These include easily transferring files and mirroring the screen. There are also some nifty AI tricks, such as being able to search images, documents, and files across all your linked devices and launching apps on a device using Moto AI on another device.
In particular, Lenovo and Motorola are taking a different approach to the one deployed by many tech companies. Instead of limiting the feature to their own devices, they've made it available to any Android or Windows user. It's refreshing, and I hope it's a sign of things to come in the industry.
One of the AI use cases I've been most excited about is real-time translation. I used to be great at learning new languages, but I need to refresh this skill. Yet, I travel a lot and have many meetings where translators are required. I've longed for a setup that made conversations seamless on the go, and Oppo, Honor, and Tecno revealed new solutions that could make this a reality.
Oppo launched the world's thinnest foldable — the Oppo Find N5 — two weeks before MWC, meaning its presence during the show was focused on its AI efforts in partnership with Google Gemini. The most exciting part of the AI Summit was the reveal of real-time translation during phone calls, which should roll out to users in the coming months. It's unclear whether this is just for Oppo phones, but it's a huge benefit for travel and work.
Honor also tackled the same problem with an entirely different approach. While Oppo's translation feature is only for phone calls, Honor's is designed for live translation during face-to-face conversations. Its solution? Its first open-ear earbuds, the Earbuds Open, have ANC and offer real-time translation; pass one to the other person, keep one in your own ear, and the two of you can talk with no interpreter needed.
Tecno took a third approach to the same problem, focusing more on business meetings and less on portability. Its new MegaBook S14 laptop can translate a conversation or business meeting in real time, letting you conduct business without human translators. Particularly interesting is that it runs entirely on-device and offline, so it should work even for an impromptu conversation mid-flight.
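To give a sense of how fully offline translation is possible at all, here's a minimal sketch using an openly available translation model. This is purely illustrative and has nothing to do with Tecno's actual implementation; once the model weights have been downloaded and cached, inference like this runs entirely on the local machine with no network connection.

```python
# Illustrative only: a local translation pass using an open MarianMT model
# from Hugging Face. After the weights are fetched once, the translation
# itself runs on-device and works offline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

meeting_line = "Shall we move the meeting to Thursday afternoon?"
result = translator(meeting_line)
print(result[0]["translation_text"])  # prints the Spanish translation
```

Real-time meeting translation adds speech recognition and low-latency streaming on top, but the core text-to-text step can genuinely live on the laptop.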
Each of these features can make your interactions with other people much easier, but the next generation of AI is also designed to be the assistant that gets things done for you.
There's a growing number of apps and services that offer some form of agentic artificial intelligence, or, in simpler terms, the ability to do things the way a human would. Consider the difference between AI right now and a human assistant: if you ask the latter to book a table, flight, or hotel with your set preferences, you're unlikely to need multiple prompts to get it done.
Alongside several new products and announcements, including its new AI-focused Alpha plan corporate strategy, 7 years of updates for its flagship smartphones, and several AI demos, Honor showed off its new AI Agent. It's a GUI-based mobile AI agent that can read your screen and perform tasks, and the restaurant-booking example above was the one used in the company's demo.
The actual user experience won't display all the steps, but the demo showed each step the agent takes. For security reasons, you also have to confirm before the final step is executed. It works with third-party applications too, and it can adapt to use your preferred app instead of the pre-programmed options.
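The flow described here, observe the screen, decide the next step, and pause for a user confirmation before the final action, can be captured in a few lines. The sketch below is my own stand-in, not Honor's code; read_screen, plan_next_step, and perform are hypothetical placeholders for the real screen-reading, planning, and dispatch layers.

```python
# A minimal sketch of a GUI agent loop with a confirmation gate before the
# final step. All functions are invented stand-ins for illustration only.
from typing import Tuple


def read_screen() -> str:
    # Stand-in for a screenshot plus on-screen text extraction step.
    return "restaurant app open, search results visible"


def plan_next_step(goal: str, screen: str) -> Tuple[str, bool]:
    # Stand-in for the model deciding what to do next from the screen state.
    # Returns (action, is_final_step).
    if "search results" in screen:
        return ("tap 'Book table for 2 at 7pm'", True)
    return ("open restaurant app", False)


def perform(action: str) -> None:
    # Stand-in for dispatching the tap or typing to the device.
    print(f"performing: {action}")


def run_agent(goal: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        action, is_final = plan_next_step(goal, read_screen())
        if is_final:
            # The security gate from the demo: the user confirms the last step.
            if input(f"Confirm final step '{action}'? [y/N] ").strip().lower() != "y":
                print("Cancelled by user.")
                return
        perform(action)
        if is_final:
            print("Done.")
            return


if __name__ == "__main__":
    run_agent("book a table for two tonight")
```

Whatever the real implementation looks like, the confirmation gate is the detail that matters: the agent drives the phone, but the irreversible step stays with you.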
Yes, "doing the mundane stuff for you" doesn't sound as cool as agentic AI, but the future of AI seems to be about helping people solve complex problems or handle repetitive tasks. It's designed to help you live life on your terms, and each of these products will help you do that.
I've been in plenty of situations where real-time translation is needed, and recording to translate later hasn't always been feasible. Many people would like an assistant for tasks like booking hotels or restaurants just to save them time, while Motorola Smart Connect could have a huge impact on all Android and Windows users.
The biggest impact will be from Google Gemini. Google is at the forefront of Android efforts around AI, but it has thankfully recognised that supporting partners is the way to get AI in the hands of the masses.
A partnership that started with Samsung Galaxy AI has extended to also working closely with phone makers — including Motorola, Honor, and Oppo — to develop and build new AI experiences. I am a big fan of Apple Intelligence Notification Summaries, but in the same week that Apple delayed the next generation of Siri, it's telling that we're getting the AI we deserve from multiple Android phone makers working together.
