Google Pixel 10 series launch likely in August: 5 things we expect
Google is gearing up to launch the Pixel 10 series, and while the design may look similar to the Pixel 9 lineup, there are plenty of rumours about major hardware changes under the hood. Here's everything you need to know before the Pixel 10 series officially arrives.
The Pixel 10 lineup is rumoured to launch at the upcoming 'Made by Google' event on August 13, 2025. Pre-orders may open the same day, with devices shipping and reaching store shelves within a week of the announcement.
Google is set to unveil four devices in the Pixel 10 lineup: the Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL and Pixel 10 Pro Fold. According to leaks, the design is mostly unchanged from the Pixel 9 lineup, featuring flat edges, a horizontal camera bar and thin bezels around the display. The colour options are rumoured to include four shades: Obsidian (Black), Blue, Iris (Purple) and Limoncello (Yellow).
The biggest upgrade in the Pixel 10 lineup will be the new Tensor G5 chipset. According to rumours, the Tensor G5 is manufactured by TSMC rather than Samsung, and the switch is expected to deliver significant improvements in performance and efficiency.
The new chip is also rumoured to include a custom TPU (Tensor Processing Unit) for AI and on-device processing. RAM may go up to 12GB, but no storage details have leaked yet.
The Pixel 10 base model is rumoured to come with a triple camera setup, with a telephoto lens offering 5X optical zoom making its debut on the base model. Some leaks suggest the main and ultrawide sensors may be borrowed from the Pixel 9a.
Google is expected to compensate for the older sensors with advanced computational photography and new AI features. We may see a better stabilisation system and smarter AI-driven camera modes.
The Pixel 10 is likely to launch with Android 16 right out of the box, along with multiple new AI features. We may also see the new Material 3 Expressive design across the software.
Google is expected to bring generative AI features like Video Generative ML for advanced video editing, Speak-to-Tweak for voice-based image editing and Sketch-to-Image. All of this is expected to be processed on-device, thanks to the new Tensor G5 chip.
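None of Google's implementation details are public, but the general shape of on-device inference is well established. The sketch below is a minimal illustration using the standard TensorFlow Lite interpreter API; the model file (image_edit.tflite) and tensor shapes are hypothetical stand-ins, not the actual Pixel pipeline.

```python
# Minimal sketch of on-device ML inference, in the spirit of features like
# Speak-to-Tweak. The model file and shapes are hypothetical; the real
# Pixel pipeline is not public.
import numpy as np
import tensorflow as tf

# Load a (hypothetical) quantised image-editing model bundled with the app.
interpreter = tf.lite.Interpreter(model_path="image_edit.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy 512x512 RGB image standing in for the photo being edited.
image = np.random.rand(1, 512, 512, 3).astype(np.float32)

# The whole forward pass runs locally: no pixels leave the device.
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
edited = interpreter.get_tensor(output_details[0]["index"])
print("edited image tensor:", edited.shape)
```

On a chip like the Tensor G5, a runtime of this kind would offload the model graph to the NPU/TPU via a hardware delegate, which is what makes local processing of generative features fast enough to be practical.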

Related Articles


Hindustan Times
Apple's chosen direction for all things incoming, and an AI regulation blueprint
I am not basing my take on Apple's WWDC 2025 announcements, and what they mean for Apple's long-term AI stakes, on vibes (or whatever similar thing the cool kids cling to), but on simple facts. A lot was announced, encompassing all of Apple's software platforms, individual apps, taking the AI conversation forward and, of course, tools for developers. Here's what you should know, in a crisp format.

For those saying Apple is far behind in the race involving other AI companies, including Google, OpenAI, Anthropic, Perplexity and others: you may have a point. That may be why Apple doesn't want to be seen competing in like-for-like situations. Trying to morph Siri into what Gemini is on Android would have raised those exact questions. Instead, it has been a very smart AI pivot, one that, in my opinion, focuses on specific use cases and therefore on individual apps.

Many of us have always craved a touchscreen MacBook Pro or MacBook Air. That dream was never realised (and perhaps it never will be), but an iPad Pro with a Magic Keyboard can now very well be that touchscreen Mac. On the agenda for the iPadOS 26 update that rolls out later this year is a new approach to multitasking, that is, handling multiple apps and windows within the display real estate. That gets a further boost from an updated Files app, the Preview app for managing PDF files, and platform parity with Apple Intelligence introductions such as Live Translation. Or, as Apple calls it, the 'biggest iPadOS release ever'. Even the flick gesture for switching between app windows is very reminiscent of how things work on a Mac, as is the Files app, quite inspired by macOS' Finder.

For Intel-based Macs, macOS Tahoe 26 marks the end of new software updates. This well and truly closes the chapter of the Intel era of Macs, as far as Apple is concerned. That said, Intel-based Macs supported in macOS Tahoe will continue to receive security updates for three years; just no new features. For now, those still holding on to an Intel-powered Mac will get the new design, Spotlight updates, Continuity features and more with the Tahoe update later this year.

We keep hearing, time and again, that AI regulation is all but inevitable. It may very well be, but there hasn't been much progress on that front beyond mere words. Odds are that actual regulation may be around the corner, and beyond that, some countries are exhibiting greater intent at understanding how AI can find a place in their societies. The United Arab Emirates (UAE) is one of those rare countries where there is actual work towards understanding how AI fits into personal and professional spaces, hopefully without detriment. There are reasons why I feel more countries need to take note of the UAE's blueprint. The UAE's direction, led by Omar Al Olama, widely considered the world's first AI Minister (appointed in 2017), takes a balanced approach to AI regulation that prioritises innovation alongside responsibility, rather than implementing restrictive early-stage regulation.


Indian Express
Meta introduces V-JEPA 2, an AI world model to power robotics and autonomous systems
It seems the AI community is gearing up for the next frontier in AI: world models. On Wednesday, June 11, Meta unveiled its new AI model, V-JEPA 2. Dubbed a 'world model', V-JEPA 2 has the ability to understand the physical world. The model has been designed to comprehend the movements of objects and has the potential to enhance robotics and self-driving cars.

V-JEPA 2 is an open-source AI model that can understand and predict real-world environments in 3D. It allows AI to build an internal simulation of the real world, essentially helping it reason, plan and act much like humans. While a traditional AI model would rely heavily on labelled data, V-JEPA 2 is reportedly trained to identify patterns in unlabelled video clips, using these as its foundation for internal 3D reasoning. The world model highlights the tech giant's increasing focus on more intuitive and intelligent AI systems that can engage with the physical world. Reportedly, this technology can be beneficial in the domains of robotics, augmented reality and future AI assistants.

'Today, we're excited to share V-JEPA 2, the first world model trained on video that enables state-of-the-art understanding and prediction, as well as zero-shot planning and robot control in new environments. As we work toward our goal of achieving advanced machine intelligence (AMI), it will be important that we have AI systems that can learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us,' Meta wrote in its official blog.

The latest announcement from Meta comes at a time when the company is facing stiff competition from rivals Google, Microsoft and OpenAI. According to a recent CNBC report, Meta CEO Mark Zuckerberg has made AI a top priority for the company, which is also planning to invest $14 billion in Scale AI, a company that pioneered data labelling for AI training.

When it comes to specifications, V-JEPA 2 is a 1.2-billion-parameter model built on Meta's Joint Embedding Predictive Architecture (JEPA), which was first shared in 2022. V-JEPA, Meta's first model trained on video, was released in 2024; with V-JEPA 2, the company claims improved action-prediction and world-modelling capabilities that allow robots to interact with unfamiliar objects and environments to accomplish a task.

In simple words, world models are mental simulations that help us predict how the physical world behaves. We humans develop this intuition from a young age: we know instinctively that a ball thrown in the air will fall back down, and while walking in a crowded space we avoid colliding with others. This inner sense of cause and effect helps us act more effectively in complex situations. AI agents need similar capabilities to interact with the real world. According to Meta, to achieve this, world models should be capable of understanding their surroundings and recognising objects, actions and movements; they should be able to predict how things will change over time, especially in response to actions; and they should plan ahead by simulating possible outcomes and choosing the best course of action.

So, to simplify, an AI world model is an internal simulation that helps a machine understand, predict and plan within a physical environment. Essentially, it helps the AI anticipate how the world will change in response to actions.
This could enable more intelligent, goal-driven behaviour in AI. The V-JEPA 2 model could enhance real-world machines like self-driving cars and robots; a self-driving car, for instance, needs to understand its surroundings in real time to move about safely. While most AI models depend on massive amounts of labelled data or video footage, V-JEPA 2 reportedly reasons about how an object moves or interacts in a simplified 'latent' space.

According to Meta's chief AI scientist, Yann LeCun, a world model is an 'abstract digital twin of reality' that allows AI to predict what will happen next and plan accordingly. It is a big leap towards making AI more useful in the physical world. In one of his recent presentations, LeCun stated that helping machines understand the physical world is different from teaching them language.

World models, a recent phenomenon, are gaining attention in the AI research community for bringing new dimensions beyond the large language models used in tools like ChatGPT and Google Gemini. In September 2024, noted AI researcher Fei-Fei Li raised $230 million for her startup World Labs, which focuses on building large-scale world models. Google DeepMind is also developing its own world model, named Genie, which is capable of simulating 3D environments and games in real time.
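The understand/predict/plan split described above maps onto a simple control loop. The following is a toy sketch of that loop, not Meta's actual V-JEPA 2 interface; the Encoder, Predictor and scoring function are hypothetical stand-ins for the large video-trained networks and learned latent space the real model uses.

```python
# Illustrative world-model control loop: encode an observation into a latent
# state, roll candidate actions forward with a learned predictor, and pick
# the action whose predicted outcome scores best. All classes here are
# hypothetical stand-ins, not Meta's V-JEPA 2 interfaces.

class Encoder:
    """Maps raw observations (e.g. video frames) to a compact latent state."""
    def encode(self, observation):
        return hash(observation) % 1000  # toy latent: a single integer

class Predictor:
    """Predicts the next latent state given a latent state and an action."""
    def predict(self, latent, action):
        return (latent + hash(action)) % 1000  # toy transition model

def plan(encoder, predictor, observation, actions, score):
    """Choose the action whose simulated outcome scores highest."""
    latent = encoder.encode(observation)             # understand the present
    best_action, best_score = None, float("-inf")
    for action in actions:
        outcome = predictor.predict(latent, action)  # imagine the future
        s = score(outcome)                           # evaluate the outcome
        if s > best_score:
            best_action, best_score = action, s
    return best_action

# Toy usage: pick the steering action whose predicted latent is "safest".
actions = ["left", "straight", "right"]
choice = plan(Encoder(), Predictor(), "frame_0042", actions,
              score=lambda latent: -abs(latent - 500))
print("chosen action:", choice)
```

In V-JEPA 2 itself, the encoder and predictor are large networks trained on unlabelled video and the latent space is learned rather than hand-coded, but the encode-imagine-score-act structure is the same idea.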

Business Standard
Samsung assembled more mobile phones in India than Apple last year
Korean firm makes more phones across most segments and has higher volumes: S&P report. Surajeet Das Gupta, New Delhi. Samsung's share of global smartphone final-assembly volumes in India might not get the same attention as Apple Inc's, because of the latter's aggressive export strategy and higher average selling prices, which translate into higher value. But in volume terms, Samsung is well ahead of its US rival. Research by S&P Global shows that Samsung's share of the global final-assembly volume of smartphones in India in 2024 was 25 per cent, compared with only 15 per cent for the Cupertino-based Apple Inc in the same period. For Samsung, its biggest exposure