
Latest news with #TechNews

Google Phone app is getting a visual makeover with Android 16's Material 3 Expressive

Yahoo

an hour ago



Material 3 Expressive design for Android 16 has been spotted in the Phone by Google app. Google's Phone app gets larger elements, new buttons, and more, and the in-call "More" controls now appear as a pop-up menu.

Android 16 is a big release, not just in terms of new features but also because of the overhaul of the operating system's Material Design language. Google is calling it Material 3 Expressive, and the company is already working on bringing the design language to some of its popular apps, including Calendar, Photos, Files, and Meet. It's safe to assume that the Mountain View tech giant will introduce Material 3 Expressive to all its Android apps to ensure design consistency across the operating system.

While we're all excited to see how Material 3 Expressive transforms each of the Google apps on Android, we just got a solid look at what the Phone by Google app will look like with Android 16's design, courtesy of Android Authority's APK teardown of the app's version 177.0.763181107-publicbeta-pixel2024. The design makeover was spotted on the incoming call screen and the in-call menu.

The incoming call screen shows the rounded call button, which still supports the vertical swipe gesture for answering or declining calls. This could be seen as a major hint that the company has no plans to replace the vertical swipe with a horizontal swipe or simple tap-to-answer/decline buttons.

Image source: Android Authority

The in-call screen also shows a new animation for the caller's profile picture. However, the animation disappears once you answer the call, with the screen then showing the name, phone number, profile picture, buttons, and menu, all of which appear bigger than the current ones.

The in-call changes go beyond size. The in-call buttons have also changed shape from round to oval, and they shift to a rounded square when pressed. We don't see any new buttons, but there is a noteworthy change in how the "More" menu appears. Currently, the "More" button reveals additional control options, including "Add call," "Video call," and "Hold," all of which appear in the same container as the other buttons. With Material 3 Expressive, these additional controls appear in a pop-up-style menu just above those buttons. Another major change is the redesigned reject call button, which is now pill-shaped rather than round.

All these changes are currently in internal testing and are not available to general users. As much as we'd love to see them in the Phone app, there is no clarity about when they will arrive. We expect the redesign to land before Material 3 Expressive rolls out to Pixel phones via a Feature Drop later in the year.

Everything you need to know from Google I/O 2025

Yahoo

3 hours ago



From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.

But we know what you're really here for: product updates and new product announcements. Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year. We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending a feature film's worth of time to get them.

Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for the Google AI Ultra plan. While Google offers a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.

Google tucked away what will easily be its most visible feature far too deep into the event, but we'll surface it to the top. At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today to everyone in the United States. It lets users run Google searches with longer, more complex queries. Using a "query fan-out technique," AI Mode breaks a search into multiple parts, processes each part of the query, then pulls all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.

Google announces AI Mode in Google Search. Credit: Google

AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can present search results in a visual graph when applicable. According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base out of all of Google's announcements today.
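For a rough picture of what a "query fan-out" amounts to, here's a minimal sketch of the general pattern: split a complex query into sub-queries, answer each independently, then pull the results back together. The function names and the splitting heuristic are invented purely for illustration; Google has not published how AI Mode actually implements this.

    from concurrent.futures import ThreadPoolExecutor

    def fan_out(query: str) -> list[str]:
        # Hypothetical splitter: a real system would use a language model for this step.
        return [part.strip() for part in query.split(" and ") if part.strip()]

    def search(sub_query: str) -> str:
        # Placeholder for an ordinary search over one sub-query.
        return f"results for: {sub_query}"

    def ai_mode(query: str) -> str:
        sub_queries = fan_out(query)
        # Process each part of the query in parallel...
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(search, sub_queries))
        # ...then pull the information together (a real system would synthesize one answer).
        return "\n".join(results)

    print(ai_mode("lightweight tents for rainy climates and camp stoves under $50"))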
Out of all the announcements at the event, the AI shopping features seemed to spark the biggest reaction from Google I/O live attendees. Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can just describe the type of product they are looking for, say, a specific type of couch, and Google will present options that match that description.

Google AI Shopping. Credit: Google

Google also ran a notable demo in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet. The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the change. Google said users will be able to try on different looks via AI in Google Labs starting today.

Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company unveiled a number of wearable products built on its AR/VR operating system, Android XR. One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset and an on-the-go pair of smartglasses, and has built Android XR to accommodate both. While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time Google showed off the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.

Project Moohan. Credit: Google

In addition to the XR headset, Google announced Glasses with Android XR: smartglasses that incorporate a camera, speakers, and an in-lens display, and that connect with a user's smartphone. Unlike Google Glass, these smartglasses will come in more fashionable looks thanks to partnerships with Gentle Monster and Warby Parker. Google shared that developers will be able to start building for Glasses next year, so a release date for the smartglasses will likely follow after that.

Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet, and demoed it turning sketches into full applications. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Flash will be released in early June, with the updated Pro coming soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first.

Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to use Jules to tackle codebase tasks and modify files.

Jules coding agent. Credit: Google

Developers will also have access to a new Native Audio Output text-to-speech model, which can replicate the same voice in different languages. The Gemini app will soon see a new Agent Mode, giving users an AI agent that can research and complete tasks based on their prompts. Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature available in Gmail this summer.
Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the context of the web page a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S. Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.

Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make too. Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics. This is an area that AI models are notoriously bad at, so Imagen 4 appears to be a big step forward.

Flow AI video tool. Credit: Google

A new video generation model, Veo 3, was also unveiled alongside a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue. Both Veo 3 and Flow are available today, alongside a new generative music model called Lyria 2. Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.

Another big announcement out of Google I/O: Project Starline is no more. Google's immersive communication project will now be known as Google Beam, an AI-first communication platform. As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.

Google Meet translations. Credit: Google

Google also had another work-in-progress project to tease under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles in order to render the individual on a 3D light-field display.

While Project Starline may have undergone a name change, it appears Project Astra is still kicking around at Google, at least for now. Project Astra is Google's real-world universal AI assistant, and Google had plenty to announce as part of it. Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the AI assistant will be able to answer queries based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.

Gemini Live. Credit: Google

It appears Google has plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement. Google also highlighted some of its hopes for Gemini Live, such as serving as an accessibility tool for people with disabilities. Another of Google's AI projects, Project Mariner, is an AI agent that can interact with the web to complete tasks for the user.
While Project Mariner was previously announced late last year, Google had some updates, such as a multitasking feature that would allow an AI agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which would let the AI agent learn from previously completed tasks so it can complete similar ones without needing the same detailed direction in the future. Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
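To give a rough sense of what "working on up to 10 tasks simultaneously" looks like in code terms, here's a toy concurrency sketch. It is entirely hypothetical, written only to illustrate the idea of capping parallel agent tasks; it has nothing to do with Project Mariner's actual implementation.

    import asyncio

    async def run_agent_task(description: str) -> str:
        # Stand-in for one web task an agent might perform (browsing, filling a form, etc.).
        await asyncio.sleep(0.1)
        return f"done: {description}"

    async def main() -> None:
        tasks = [f"task {i}" for i in range(1, 11)]
        # Cap concurrency at 10, mirroring the "up to 10 tasks" figure from the announcement.
        limit = asyncio.Semaphore(10)

        async def bounded(description: str) -> str:
            async with limit:
                return await run_agent_task(description)

        results = await asyncio.gather(*(bounded(t) for t in tasks))
        print(results)

    asyncio.run(main())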

Microsoft Warns Windows Users—Emergency Update Within Days

Forbes

2 days ago



Emergency update coming soon. NurPhoto via Getty Images

Another week, another emergency update for Windows users. Just days after Microsoft warned that May's security update was failing for some users, the company has confirmed it has been working on the issue and that an emergency update is coming soon, telling Windows users it 'plans to release an out-of-band update in the coming days.' While this affects Windows 11 users, there are echoes of the emergency update for Windows 10 users earlier in the month.

Microsoft was quick to acknowledge that it was 'investigating reports of the May 13, 2025 Windows security update (KB5058405) failing to install on some Windows 11, version 22H2 and 23H2 devices.' As I reported when that news hit, an out-of-band update was the inevitable next step.

If you're affected by the issue, you will see a recovery error warning that 'your PC/Device needs to be repaired,' and that 'the operating system couldn't be loaded because a required file is missing or contains errors.' Microsoft has explained that this is a driver issue and will likely display the error code 0xc0000098. 'The file (Advanced Configuration and Power Interface) is a critical Windows system driver that enables Windows to manage hardware resources and power states.' The company has also warned that 'there are also reports of this same error occurring with a different file name.'

While some physical devices have been impacted, most reports of this update failure concern virtual environments, 'including Azure Virtual Machines, Azure Virtual Desktop [and] on-premises virtual machines hosted on Citrix or Hyper-V.' That means it's far more likely to impact enterprise rather than home users.

This is different to May's other emergency update, which addressed Windows 10 updates failing with a BitLocker Recovery screen when trying to install May's security update. 'Windows 10 might repeatedly display the BitLocker recovery screen at startup,' the company warned, confirming that other out-of-band update.

First look: Google's Phone app is getting a tasty Android 16 redesign (APK teardown)

Android Authority

3 days ago



Edgar Cervantes / Android Authority

TL;DR
  • An Android Authority teardown has revealed Material 3 Expressive design tweaks coming to Google's Phone app.
  • The visual tweaks currently apply to the incoming call and in-call menus.
  • This comes after we discovered visual changes coming to several other Google apps as well.

Google is working on a visual overhaul of Android 16, using its Material 3 Expressive design. We've already spotted a few Google apps with similar tweaks, and we've now uncovered a major overhaul of Google's Phone app.

You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else. An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

We cracked open the Phone by Google app (version 177.0.763181107-publicbeta-pixel2024) and enabled the app's redesign. The visual tweaks apply to the incoming call and in-call menus. Check out the gallery below for a comparison.

Gallery: new UI versus old UI screenshots

The redesigned screens reflect the Material 3 Expressive style, featuring much larger contact names and caller photos. The redesigned app also mostly drops simple circular buttons in favor of larger, oval-shaped buttons that change shape when pressed. The answer call button still has the same circular icon, but the end call button is much larger and pill-shaped.

There are several other smaller tweaks too. These include the omitted 'call from' text on the incoming call screen, the phone number being shown after you answer the call, and the redesigned 'more' menu in line with Material 3 Expressive. The Google Phone app also offers a little animation for your incoming caller's profile picture before you answer the call. Check out a slowed-down version of this and other app animations below.

These Google Phone tweaks come after we discovered Material 3 Expressive changes coming to the Google One, Google Meet, and Google TV apps. We expect plenty more Google apps to get visual changes in the coming months. In any event, we're glad to see Google making progress on redesigning its apps. But you won't necessarily need Android 16 to see these overhauled apps, as we're expecting these app redesigns to be available on earlier Android versions too.

Samsung Routines is learning some new tricks for One UI 8

Android Authority

3 days ago



Ryan Haines / Android Authority

TL;DR
  • One UI 8 just arrived in its first beta, and the release adds new actions to Samsung Routines.
  • Clock actions expand with new options for alarms and stopwatches.
  • Routines also picks up initial support for Calendar and Samsung Notes.

This is a fun time to be a Samsung fan on Android. Not only is One UI 7 landing on more and more of the company's Galaxy lineup (including new midrange devices all the time), but the company just opened the doors on Android 16, inviting users to start playing around with its One UI 8 beta. We've been digging through it in search of any changes worth sharing with you, and we've just spotted some useful upgrades to Samsung Routines.

Routines have got to be the easiest way to start feeling like a power user. Just by setting up some basic if/then relationships, you're able to automate all sorts of tasks on your phone. With One UI 7, we saw Samsung give us a bunch of new options for the actions supported in Routines, and now we've got a handful of further additions in One UI 8.

The first we're looking at concerns some new clock-related actions. The screen you see on the left above reflects the options available in One UI 7, while on the right we see the extent to which Samsung is expanding them in One UI 8. Beyond just allowing Routines to turn alarms on or off, the update adds the ability to create new alarms and edit existing ones. We similarly get some finer-grained stopwatch options.

We're also seeing some all-new additions to Routines, with support for actions involving Calendar and Samsung Notes starting to arrive. For both notes and calendar events we get a similar selection of options, with the ability to search, display, edit, or create a new one.

This is a good start, and we definitely appreciate having even more tasks we're able to automate, but compared to the dozens of new actions we saw Routines pick up in One UI 7, this can't help but feel a little meager. That said, this is just our first taste of a One UI 8 beta, and Samsung has plenty of time ahead of it to flesh out its updates further.
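If it helps to picture what an if/then Routine boils down to, here's a tiny conceptual model in code: a trigger condition paired with a list of actions. The class, functions, and example are invented purely for illustration; Samsung Routines is configured through its own UI, not through anything like this.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Routine:
        name: str
        trigger: Callable[[], bool]  # the "if" part
        actions: list[Callable[[], None]] = field(default_factory=list)  # the "then" part

        def run_if_triggered(self) -> None:
            if self.trigger():
                for action in self.actions:
                    action()

    # Hypothetical example loosely echoing the new Calendar and Notes actions in One UI 8:
    # when a work calendar event starts, mute the phone and create a meeting note.
    def work_event_started() -> bool:
        return True  # stand-in for a real calendar check

    meeting_mode = Routine(
        name="Meeting mode",
        trigger=work_event_started,
        actions=[
            lambda: print("Setting sound mode to mute"),
            lambda: print("Creating a new Samsung Notes page for the meeting"),
        ],
    )
    meeting_mode.run_if_triggered()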
