
Two Weeks Before Launch, Google Pixel 10 Pro XL Details Emerge
Pixel 10 Pro XL specs have also leaked from multiple sources in the build-up to August 20. Google's rumored next flagship may sport a 6.8-inch AMOLED display, the Tensor G5 chipset, and a 5,200 mAh battery. Storage is said to start at 256 GB and max out at a whopping 1 TB, paired with up to 16 GB of RAM.
Camera-wise, it's largely business as usual for the Pixel 10 Pro XL, though it's reassuring to see the rumored setup hold steady in the leaks. The phone is tipped to have a quad rear arrangement: a 50 MP main camera, a 48 MP ultra-wide, a 48 MP telephoto, and an infrared temperature sensor. The camera design is reportedly an evolution of previous models in the series, and these leaks suggest Google intends to build on the design wins of earlier Pixels. The device is said to have rounded corners and slimmer bezels around the display.
Leaked Google Pixel 10 Pro XL features already surfaced earlier this month, and with August 20 just two weeks away, very little will be left to unveil at the Made by Google 2025 event. However, these latest leaks give us the first truly close look at how the device could look, and they leave the impression of a phone Google can trust to deliver.
Whether the Google Pixel 10 Pro XL ends up priced above or below previous models, it has a distinctive and desirable look, especially given how little changed between past Pixel iterations.

Related Articles


Indian Express
NASA and Google develop an AI medical assistant to be used by astronauts on deep‑space missions
Google and NASA are working on a medical assistant powered by artificial intelligence that could be used on extended space missions, starting with NASA's Artemis campaign to return to the moon. The Crew Medical Officer Digital Assistant (CMO-DA) is powered by Google AI trained on spaceflight literature and is designed to support a crew medical officer or flight surgeon in keeping the flight crew healthy. The AI medical assistant provides real-time analysis of crew health and performance, enabling medical decisions driven by data and predictive analytics.

Google said the AI assistant is being tested with simulated scenarios and evaluated using a clinical framework designed to assess the performance of medical students. The model performed well in early tests, with a diagnostic accuracy of 88 per cent for an ankle injury case, 80 per cent for ear pain, and 74 per cent for flank pain, according to a TechCrunch report. 'Early results showed promise for reliable diagnoses based on reported symptoms,' the blog reads. 'Google and NASA are now collaborating with medical doctors to test and refine the model, aiming to enhance autonomous crew health and performance during future space exploration missions.'

The project is being implemented under a fixed contract with Google Public Sector, which includes cloud computing, infrastructure for application development, and model training. NASA owns the source code of the application, and the agency will participate in the finalisation of the models. Vertex AI provides access to both Google models and third-party solutions. NASA plans to gradually expand the system's capabilities: future versions will incorporate real-time data from onboard medical devices and learn to detect spaceflight-specific health conditions, such as the effects of microgravity on the human body.

In a post on X dated August 7, 2025, Thomas Kurian (@ThomasOrTK) wrote: 'Google Cloud and @NASA have collaborated on a new AI-powered proof of concept tool to help astronauts autonomously diagnose and treat symptoms during deep space missions, a significant step that could also benefit remote medical care on Earth.'

Both Google and NASA are now working with doctors to refine the model, with the goal of improving autonomous medical care for future missions to the moon, Mars and beyond. The technology could also help deliver quality medical care to people in remote parts of Earth.

This isn't the first NASA project to incorporate artificial intelligence. Earlier this year, the space agency's Jet Propulsion Laboratory successfully tested a new AI system called Dynamic Targeting, which allows Earth-observing satellites to autonomously decide where and where not to point their cameras in under 90 seconds, without human intervention. Developed over more than a decade, the technology mimics how a human might interpret imagery.

Business Standard
Gemini Live gets real-time access to Google Calendar, Tasks and Keep apps
Google's Gemini Live now supports real-time integration with Calendar, Tasks, and Keep, enabling users to manage schedules, reminders, and notes directly in live chats on Android and iOS.

New Delhi: Google has expanded Gemini Live's capabilities with real-time integration into Google Calendar, Tasks, and Keep for Android and iOS. The update allows users to manage schedules, reminders, and notes directly within live conversations. The integration, first teased at Google I/O 2025 in May, moves Gemini Live towards more personalised functionality by linking it with widely used Google apps. Users can now create Calendar events, set Task reminders, and add notes to Keep without leaving the chat interface.

How it works

The feature, which entered limited testing in late June, is now more widely available. It starts with integration for Google Maps, Calendar, Tasks, and Keep, enabling actions like adding events mid-chat or pulling location details instantly. On Samsung devices, Gemini Live also connects with Calendar, Notes, and Reminders. According to 9To5Google, when enabled, the interface shows the app name above fullscreen controls along with a loading indicator. Actions such as creating a list prompt confirmation messages and an 'Undo' option for quick edits. Users can reference apps directly (for example, 'Create a new task in Tasks') or make general queries like 'Do I have any reminders today?' to trigger responses. The features work alongside Gemini Live's video and screen sharing, allowing, for instance, immediate event creation when dates are detected in the user's environment or on-screen.

Rollout status

While some users gained access in late June, the rollout has been gradual. The integration is now appearing in both stable and beta versions of the Google app on Android, as well as on iOS, broadening access to Gemini Live's personal data tools.


News18
Elon Musk's Grok AI Brings ‘Imagine' To Android For Free: What Is It And How It Works
Grok AI is getting a new image and video generation tool, it is rolling out to more Android users, and it is free. Alongside the ChatGPT 5 launch earlier this month, Elon Musk's Grok AI chatbot is making its Imagine feature free for everyone and available to Android users as well. Grok Imagine is an AI tool that generates videos and photos from text or even voice prompts. OpenAI and Google have their own tools for creating videos and images; Grok Imagine makes the process more interactive and lets you convert still images into moving visuals. Musk has claimed that over 44 million images have been created with Grok Imagine, and the number continues to grow. To use Grok Imagine, you need to install the Grok app from the Play Store on Android or the App Store on iPhone.

Grok AI Free Video Generation

Once you have generated an image, Grok Imagine lets you turn it into a video via a 'Make video' option. The app offers four modes: Normal, Fun, Custom, and a rather controversial Spicy mode. Grok is clearly going up against ChatGPT in the market, and the latest GPT 5 launch has invited comparisons between the two AI models. Musk has been publicly sharing his views on the new model and how it compares with xAI's own Grok 4, which is available via X and the Grok AI app on mobile and web. He said in a post on X that Grok 5 will be out before the end of 2025 and will be 'crushingly good'. Grok 4 was released in July, and xAI called it the world's most powerful AI model, though that was before GPT 5 came out. GPT 5 promises a host of improvements over GPT-4, the biggest being a lower rate of hallucinations, something we are yet to see from Grok. We are eager to see whether Grok 5 manages to improve in these areas.

First Published: August 11, 2025, 10:37 IST