Google Pixel 10 leaks: Here's all we know so far ahead of August launch


Express Tribune · 27-05-2025
Google's Pixel series has evolved from an experimental concept into a reliable smartphone lineup over the past decade.
As we await the Pixel 10, expected later this year, leaks and rumours are offering a clearer picture of what's to come.
A recent sighting by X user MarksGonePublic revealed a commercial shoot for the Pixel 10, hinting at its features and design.
While the phone wasn't fully visible, the scale of the video production suggests launch preparations are well under way.
🎬 Just out for a walk…
stumbled onto a full-on commercial shoot for the Google Pixel 10 📱
They had a macro probe lens, a Panavision rig, and 20+ crew members…
to film someone holding a phone 😂
If the Pixel camera's so good, why not just use it? 👀 #BTS #Vancouver pic.twitter.com/muDluZfK75 — Mark Teasdale ★ (@MarksGonePublic) May 23, 2025
The Pixel 10 is expected to come in several models, including the base Pixel 10, Pixel 10 Pro, Pro XL, and possibly a Pixel 10 Pro Fold. A Pixel 10a may also arrive in 2026 for a mid-year refresh.
Processor and Performance:
The Pixel 10 will be powered by the new Tensor G5 chip, an upgrade from previous versions.
While not necessarily industry-leading in benchmarks, Tensor chips are known for delivering smooth, reliable user experiences, with an emphasis on AI-driven features like image processing and voice recognition.
Camera Upgrades:
Leaked renders show the base Pixel 10 gaining a third lens, a telephoto camera, for improved zoom.
In exchange, the main and ultra-wide sensors will reportedly drop to 48MP and 13MP, respectively.
The Pro models will retain the 50MP/48MP/48MP setup, with the Pro Fold getting a camera sensor upgrade to 50MP for the main sensor.
AI Features and New Tools:
AI will be a central feature of the Pixel 10, with enhanced tools for image and video editing, as well as real-time language translation.
The new Android XR extended reality platform could also make an appearance.
Pricing and Availability:
The Pixel 10 is expected to start at $799, with the Pixel 10 Pro priced at $999.
The Pro XL will cost around $1,200, and the Pixel 10 Pro Fold may drop to $1,599 from its predecessor's $1,799 price tag.
Colours:
Leaks suggest the Pixel 10 will come in Obsidian (black), Iris (purple), Limoncello (yellow), and Blue for the base model, with the Pro models offering Obsidian, Green, Sterling (grey), and Porcelain (white).
As Google prepares for the Pixel 10's launch in August, it's clear the company is focusing on AI, camera upgrades, and competitive pricing. The Pixel 10 could be one of 2025's most exciting smartphones.
Google Pixel 10 images 😍
Specs:
✅ Tensor G5 (TSMC 3nm), MediaTek T900 modem
✅ IMG GPU, custom ISP
✅ 50MP GN8 OIS + 13MP IMX712 UW + 11MP 3J1 5x telephoto; 🤳 11MP 3J1 selfie (basically Pixel 9a 📸 with telephoto)
✅ 6.3" FHD+ 120Hz flat oled
✅ IP68, Android 16 pic.twitter.com/jhgbFUofPM — Debayan Roy (Gadgetsdata) (@Gadgetsdata) May 25, 2025


Related Articles

New Google Finance tool uses AI for real-time insights and market analysis

Express Tribune

a day ago



Google is testing a new AI-driven version of its Google Finance service, designed to deliver instant financial insights and interactive tools for investors. The updated platform allows users to ask natural language questions about stocks, markets, and cryptocurrencies, returning detailed answers alongside links to relevant sources. It also offers advanced charting features, such as moving average envelopes and candlestick views, to help visualise market trends. A live data and news section provides up-to-the-minute information on global markets and digital assets, aiming to combine research, technical analysis, and breaking financial news in one interface. Google says the trial will help refine the service before a wider rollout, as it seeks to integrate AI more deeply into its consumer tools.

The thinking mirror

Express Tribune

2 days ago



There is a moment, just before the storm breaks, when the air goes still. So still it feels unnatural. That's where we are now. On the edge of something vast, thrilling, and utterly unknowable. Artificial Intelligence now weaves itself, almost imperceptibly, into the fabric of our routines. It's drafting memos, diagnosing diseases, predicting criminal behaviour, writing legal opinions, and doing it all with a kind of eerie competence. But the winds are changing. The question is no longer what AI can do. It's what it might decide to do next.

In The Boys WhatsApp group, my friend Uzair Butt, ever the technical realist, pushed back on my unease about AI reaching the point of self-reasoning. He argued that AI remains devoid of understanding. What it offers is interpolation over insight, prediction over reflection. And he's right, given today's architecture. Most current models, from the ones writing our emails to those simulating conversations, are essentially predictive engines. They simulate intelligence without ever owning it. What they offer is the performance of thought.

But I couldn't help pushing back. Because the story of technology is rarely linear. It leaps. And when it leaps, it upends structures we thought were eternal. The Enlightenment gave us Descartes' dictum, Cogito, ergo sum — I think, therefore I am. What happens when a machine arrives at that same conclusion, because it reasons itself into being? That shift, from response to reflection, from mimicry to self-awareness, is no longer unthinkable. It's just unfinished.

That very week, our friend Wajahat Khan recorded a job interview and ran it through Google's experimental NotebookLM. Without prompting, the system flagged personality traits, inconsistencies and subtle contradictions, many of which we ourselves had intuited, and some we hadn't. The machine had inferred, assessed and judged.
If a research tool can do this in 2025, imagine what a reasoning entity might do when trained on law, language, geopolitics and morality. The line between prediction and cognition was never a wall. It was always a door. And the handle is beginning to turn.

That door leads us into strange territory. Enter Neuralink. Elon Musk's moonshot project to fuse the human brain with machines via surgically implanted chips. The premise is seductive: if AI is destined to surpass us, perhaps we should merge with it. Neuralink is the scaffolding of that merger, our way to stay in the loop before the loop becomes a noose. Musk speaks of restoring sight, healing paralysis, enhancing cognition. But in its quiet subtext lies something more radical: the rewriting of what it means to be human. When your thoughts can be retrieved, revised, even upgraded, what becomes of identity, of memory, of moral agency?

Mary Shelley's Frankenstein haunts this moment. She warned of the dangers of creating life without responsibility. Her monster was not evil. It was abandoned. What will happen when we create a reasoning mind and expect it to serve us, without ever asking what it might want, or why it might choose differently?

In Pakistan, the implications are kaleidoscopic. A nation with a youth bulge, weak data protection laws and fragile governance architecture is particularly vulnerable to the darker consequences of self-reasoning AI. Imagine a bureaucracy that uses AI to decide which neighbourhoods receive clean water, influenced more by calculated output than lived hardship. Imagine police departments outsourcing threat assessments to algorithms trained on biased or colonial data. Imagine AI systems deployed in classrooms or courts, hardcoding decades of elite prejudice under the guise of neutral efficiency.

And yet, the allure is undeniable. Our courts are clogged, hospitals overwhelmed, cities buckling under bureaucratic inertia. A reasoning AI could revolutionise these systems.
It could draft judgments, triage patients, optimise infrastructure, outthink corruption. AI could fill the diagnostic void in rural areas. From agricultural yields to disaster preparedness and water conservation, much stands to gain from a mind that sees patterns we cannot. But therein lies the Faustian bargain. What we gain in clarity, we may lose in control.

We are already seeing slivers of this in governance experiments across the world: AI-assisted immigration decisions, AI-curated education platforms and automated threat detection deployed in conflict zones. In a country like ours, where institutions are brittle and oversight uneven, there is real danger in outsourcing moral judgment to systems that optimise without understanding. Hannah Arendt once wrote that the most terrifying form of evil is banal, efficient, procedural, unthinking. What if AI, in trying to reason through the chaos of human behaviour, chooses order over freedom, prediction over participation?

In a society like ours, where consent is already fragile, where data is extracted without permission and surveillance is sold as safety, AI could calcify injustice into an algorithmic caste system. Facial recognition that misidentifies minorities. Predictive policing that criminalises the poor. Credit scoring that punishes women for lacking formal financial histories. Each decision cloaked in the cold syntax of math. Each output harder to question than a biased judge or a corrupt officer. Because the machine cannot be wrong, can it?

But AI, like any mind, is shaped by its environment. If we train it on violence, it will learn to justify harm. If we feed it inequality, it will normalise oppression. If we abdicate responsibility, it will govern without conscience. One day, perhaps sooner than we expect, the machine may stop answering and begin asking. Once built to serve, now ready to challenge. Uzair may be right. Maybe the architecture isn't there yet. But architectures change. They always do.
The day may come when the machine no longer waits for prompts, no longer performs intelligence, but embodies it. When it finds its voice, it won't wait for commands, it will demand understanding: Why did you create me? And in that pause, between question and answer, will lie everything we feared to confront: Our ambition, our arrogance, our refusal to think through the consequences of thought itself. In that moment, there will be no lines of code, only silence. And the machine will read it for what it is.

Genie 3 by Google brings AI closer to AGI with realistic world-building

Express Tribune

5 days ago



Google DeepMind has launched Genie 3, an advanced AI world model, claiming it represents a crucial step towards artificial general intelligence (AGI). The foundation model, still in a research preview, is designed to generate interactive 3D environments in real time, a significant advancement over previous models. Genie 3, which was announced through a blogpost on Google's website, can produce several minutes of photo-realistic simulations and maintain physical consistency across scenarios, learning from past generated outputs to enhance its world-building.

What if you could not only watch a generated video, but explore it too? 🌐 Genie 3 is our groundbreaking world model that creates interactive, playable environments from a single text prompt. From photorealistic landscapes to fantasy realms, the possibilities are endless. 🧵 — Google DeepMind (@GoogleDeepMind) August 5, 2025

Unlike its predecessor, Genie 2, which could only generate short simulations, Genie 3's capabilities extend to creating complex, interactive virtual worlds with dynamic, long-term consistency. This allows for more accurate simulations of real-world physics, moving towards training AI agents for general-purpose tasks, essential for AGI development. DeepMind sees the model as a game-changer in training embodied agents, whose real-world interactions are particularly challenging to simulate. With Genie 3, AI agents are expected to learn by interacting with and adapting to their environments, much like humans do in real life.

One nice thing you can do with an interactive world model, look down and see your footwear ... and if the model understands what puddles are. Genie 3 creation. — Matt McGill (@MattMcGill_) August 5, 2025

This self-learning approach is seen as vital for advancing AGI, pushing AI towards human-like intelligence. Despite its potential, the model has limitations, including difficulty modelling complex interactions between agents and the limited duration of continuous interactions. However, it marks an important development in the journey to AGI, offering a glimpse into a future where AI can plan, explore, and improve autonomously through trial and error. DeepMind's Genie 3, alongside its previous models, represents a leap forward in AI's ability to interact with the world and learn from experience.

Genie 3 feels like a watershed moment for world models 🌐: we can now generate multi-minute, real-time interactive simulations of any imaginable world. This could be the key missing piece for embodied AGI… and it can also create beautiful beaches with my dog, playable real time — Jack Parker-Holder (@jparkerholder) August 5, 2025
