Google may have updated its logo for the first time in nearly 10 years

Time of India · 12-05-2025

Google has reportedly changed its logo and is debuting a refresh of its recognisable multi-coloured 'G' icon, marking the first visual update to the symbol in nearly a decade. The change is said to have begun appearing on the company's mobile applications on both Android and iOS, introducing a subtle but noticeable shift in design.
The current circular 'G' icon was introduced on September 1, 2015, as part of a broader redesign that saw Google update its main six-letter wordmark to a modern sans-serif typeface called Product Sans. Prior to that, the 'G' icon featured a lowercase white 'g' set against a solid blue background.
The 'new' Google logo: What has changed
According to a report by 9to5Google, the updated icon moves away from the distinct, solid colour segments that have characterised the 'G' for the past ten years.
Instead, the new design features a subtle blending effect, with the red section bleeding into the yellow, the yellow into the green, and the green flowing into the blue.
The report goes on to say that the updated 'G' icon is currently in use by the Google Search app for iOS. On Monday (May 12), the change also arrived on the Android platform with the beta version of the Google app (version 16.18).
While the change is rolling out, it is a relatively subtle alteration that users might not immediately notice.
At the time of writing, members of the Time of India-Gadgets Now team had not yet received the refreshed 'G' icon on their devices, suggesting a phased rollout.
The report also notes that Google does not appear to be simultaneously refreshing its main six-letter 'Google' wordmark. It remains unclear whether the new blending style will be applied to other product logos that currently use the company's four-colour scheme, such as Chrome or Maps, although the design concept could theoretically be applied to their multi-sectional icons with relative ease.
There have been reports claiming that Google plans to announce Material Design 3 in Android 16 at the upcoming Google I/O, and it is speculated that an official announcement of the refreshed logo may be made at that time.

Related Articles

Meta introduces V-JEPA 2, an AI world model to power robotics and autonomous systems

Indian Express · 30 minutes ago

Meta introduces V-JEPA 2, an AI world model to power robotics and autonomous systems

It seems the AI community is gearing up for the next frontier in AI: world models. Meta, on Wednesday, June 11, unveiled its new AI model, V-JEPA 2. Dubbed a 'world model', V-JEPA 2 has the ability to understand the physical world. The model has been designed to comprehend the movement of objects and has the potential to enhance robotics and self-driving cars.

V-JEPA 2 is an open-source AI model that can understand and predict real-world environments in 3D. It allows AI to build an internal simulation of the real world, essentially helping it reason, plan, and act much like humans. While a traditional AI model would rely heavily on labelled data, V-JEPA 2 is reportedly trained to identify patterns in unlabelled video clips, using these as its foundation for internal 3D reasoning.

The world model highlights the tech giant's increasing focus on more intuitive and intelligent AI systems that can engage with the physical world. Reportedly, this technology could be beneficial in the domains of robotics, augmented reality, and future AI assistants.

'Today, we're excited to share V-JEPA 2, the first world model trained on video that enables state-of-the-art understanding and prediction, as well as zero-shot planning and robot control in new environments. As we work toward our goal of achieving advanced machine intelligence (AMI), it will be important that we have AI systems that can learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us,' Meta wrote in its official blog.

The latest announcement from Meta comes at a time when the company is facing stiff competition from rivals Google, Microsoft, and OpenAI. According to a recent CNBC report, Meta CEO Mark Zuckerberg has made AI a top priority for the company, which is also planning to invest $14 billion in Scale AI, a company that pioneers data labelling for AI training.

When it comes to specifications, V-JEPA 2 is a 1.2-billion-parameter model built on Meta's Joint Embedding Predictive Architecture (JEPA), which was first shared in 2022. V-JEPA, Meta's first model trained on video, was released in 2024; with V-JEPA 2, the company claims improved action prediction and world-modelling capabilities that allow robots to interact with unfamiliar objects and environments to accomplish a task.

In simple words, world models are mental simulations that help us predict how the physical world behaves. We humans develop this intuition from a young age: we know instinctively that a ball thrown in the air will fall back down, and while walking in a crowded space we avoid colliding with others. This inner sense of cause and effect helps us act more effectively in complex situations.

AI agents need similar capabilities to interact with the real world. According to Meta, to achieve this, world models should be able to understand their surroundings and recognise objects, actions and movements; predict how things will change over time, especially in response to actions; and plan ahead by simulating possible outcomes and choosing the best course of action.

To simplify, an AI world model is an internal simulation that helps a machine understand, predict, and plan within a physical environment. Essentially, it helps the AI anticipate how the world will change in response to actions.
This could enable more intelligent, goal-driven behaviour in AI. The V-JEPA 2 model could likely enhance real-world machines like self-driving cars and robots. For instance, self-driving cars need to understand their surroundings in real time to move about safely. While most AI models depend on massive amounts of labelled data or video footage, V-JEPA 2 reportedly uses a simplified 'latent' space to reason about how an object moves or interacts.

According to Meta's chief AI scientist, Yann LeCun, a world model is an 'abstract digital twin of reality' that allows AI to predict what will happen next and plan accordingly. It is a big leap towards making AI more useful in the physical world. In one of his recent presentations, LeCun stated that helping machines understand the physical world is different from teaching them language.

World models, a recent phenomenon, are gaining attention in the AI research community for adding new dimensions beyond the large language models used in tools like ChatGPT and Google Gemini. In September 2024, noted AI researcher Fei-Fei Li raised $230 million for her startup World Labs, which focuses on building large-scale world models. Google DeepMind is also developing its own world model, named Genie, which is capable of simulating 3D environments and games in real time.
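For readers who want a concrete picture of the joint-embedding predictive idea behind JEPA-style models, here is a minimal toy sketch in Python. It is an illustration under stated assumptions, not Meta's actual V-JEPA 2 code: the Encoder and Predictor modules, their sizes, and the training step are all invented for clarity. What it demonstrates is the one detail the article highlights: the prediction loss lives in latent space, not in raw pixels.

```python
# Toy sketch of a JEPA-style predictive world model: encode the current
# video frame into a latent vector, then predict the *latent* of the
# next frame rather than its pixels. Illustrative only -- the modules
# and sizes below are invented, not Meta's actual V-JEPA 2 code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 frame to a compact latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

class Predictor(nn.Module):
    """Predicts the next frame's latent from the current latent."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
# Stand-in batch of 8 frame pairs (random noise in place of real video).
frame_t, frame_t1 = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)

z_t = encoder(frame_t)              # latent of the current frame
z_t1_pred = predictor(z_t)          # predicted latent of the next frame
with torch.no_grad():               # target encoding held fixed here
    z_t1_target = encoder(frame_t1)

# The loss is computed in latent space; pixels are never reconstructed.
loss = nn.functional.mse_loss(z_t1_pred, z_t1_target)
loss.backward()
```

Predicting pixels would force the model to account for irrelevant detail (every leaf, every shadow); predicting in latent space lets it focus on how the scene changes, which is the intuition behind the 'simplified latent space' mentioned above.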

AI explained: Your simple guide to chatbots, AGI, Agentic AI and what's next

Time of India · 6 hours ago

AI explained: Your simple guide to chatbots, AGI, Agentic AI and what's next

The tech world is changing fast, and it's all thanks to Artificial Intelligence (AI). We're seeing amazing breakthroughs, from chatbots that can converse like a human to phones that are getting incredibly smart. This shift is making us ask bigger questions. It's no longer just about "what can AI do right now?" but more about "what will AI become, and how will it affect our lives?"

First, we got used to helpful chatbots. Then the idea of a "super smart" AI, called Artificial General Intelligence (AGI), started taking over headlines. Companies like Google, Microsoft, and OpenAI are all working hard to make AGI a reality. But even before AGI gets here, the tech world is buzzing about Agentic AI. With all these new terms and fast changes, it's easy for those of us who aren't deep in the tech world to feel a bit lost. If you're wondering what all this means for you, you're in the right place. In this simple guide, we'll answer your most important questions about the world of AI, helping you understand what's happening now and get ready for what's next.

What is AI and how does it work?

In the simplest terms, AI is about making machines – whether smartphones or laptops – smart. It's a field of computer science that creates systems capable of performing tasks that usually require human intelligence. Think of it as teaching computers to "think" or "learn" in a way that mimics how humans do. These tasks can include understanding human language, recognising patterns and even learning from experience. Like humans, an AI system uses its training to achieve its goal: solving problems and making decisions.

That brings us to the next question: how is a machine trained to do tasks like humans? While AI might seem like magic, it works on a few core principles. Just as humans get their information from observing, reading, listening and other sources, AI systems learn from vast amounts of data, including text, images, sounds, numbers and more.

What are large language models (LLMs) and how are they trained?

As mentioned above, AI systems need to learn, and for that they use Large Language Models, or LLMs. These are highly advanced AI programmes specifically designed to understand, generate and interact with human language. Think of them as incredibly knowledgeable digital brains that specialise in certain fields.

LLMs are trained on enormous amounts of text data – billions and even trillions of words from books, articles, websites, conversations and more. This vast exposure allows them to learn the nuances of human language: grammar, context, facts and even different writing styles. An LLM is like a teacher with a vast amount of knowledge who understands complex questions and can reason through them to provide relevant answers. The teacher provides the core knowledge and framework; chatbots then use this "teacher" (the LLM) to interact with users. The chatbot is the "student" or "interface" that applies the teacher's lessons. This also means AI is really good at specific tasks, like playing chess or giving directions, but it can't do things beyond its programmed scope.
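To make "trained on enormous amounts of text" concrete, here is a minimal toy sketch of next-token prediction, the core training objective behind LLMs. Everything in it is an invented illustration – the six-word vocabulary, the tiny recurrent model and the single training sentence – whereas production LLMs are transformers with vocabularies of tens of thousands of subword tokens trained on trillions of words.

```python
# Toy sketch of next-token prediction, the core objective behind LLM
# training: given the tokens so far, score every word in the vocabulary
# as a candidate for the next token. Illustrative only.
import torch
import torch.nn as nn

vocab = ["<pad>", "the", "cat", "sat", "on", "mat"]  # hypothetical tiny vocabulary
token_ids = {w: i for i, w in enumerate(vocab)}

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # stand-in for a transformer
        self.head = nn.Linear(dim, vocab_size)          # logits over the next token

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.rnn(self.embed(ids))
        return self.head(hidden)  # one prediction per position

model = TinyLM(len(vocab))
# Training pairs: at every position, the target is simply the next word.
sentence = torch.tensor([[token_ids[w] for w in ["the", "cat", "sat", "on", "mat"]]])
inputs, targets = sentence[:, :-1], sentence[:, 1:]

logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, len(vocab)), targets.reshape(-1)
)
loss.backward()  # repeated over billions of sentences, this is "training"
```

At every position the model is asked to guess the next word and is corrected when it is wrong; repeating that step across an enormous corpus is what lets it absorb grammar, facts and style.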
How is AI helpful for people?

AI is getting deeply integrated into our daily lives, making things easier, faster and smarter. For example, it powers voice assistants that can answer questions in seconds; in healthcare, doctors can ask AI to analyse medical images (like X-rays, for early disease detection) in seconds and help patients more effectively, or assist in drug discovery. It aims to make people more efficient by letting them delegate some work to AI and focus on the bigger problems.

What is Agentic AI?

At its core, Agentic AI focuses on creating AI agents – intelligent software programmes that can gather information, process it for reasoning, execute ideas by taking decisions, and even learn and adapt by evaluating their outcomes. For example, a chatbot is a script: "If a customer asks X, reply Y." A generative AI (LLM) is like a brilliant essay writer: "Give it a topic, and it'll write an essay." Agentic AI is like a project manager: "My goal is to plan and execute a marketing campaign." It can then break down the goal, generate ideas, write emails, schedule meetings, analyse data and adjust its plan – all with minimal human oversight – just like JARVIS in the Iron Man and Avengers movies. (A toy version of this gather-reason-act loop is sketched at the end of this article.)

What is AGI?

AGI is a hypothetical form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of intellectual tasks at a level comparable to, or surpassing, that of a human being. Think of AGI as a brilliant human polymath – someone who can master any subject, solve any problem and adapt to any challenge across various fields. While AI agents are created to take up specific tasks, which they learn and execute, AGI would be like a 'Super AI Agent' that has virtually all the information there is in the world and can solve problems on any subject.

Will AI take away our jobs, and what can people do?

Various tech CEOs and executives across the industry give a straightforward answer: yes. AI will take over repetitive, predictable tasks and extensive data processing, such as data entry, routine customer service, assembly-line operations, basic accounting and certain analytical roles. While this means some existing positions may be displaced, AI will more broadly transform roles, augmenting human capabilities and shifting the focus towards tasks requiring creativity, critical thinking, emotional intelligence and strategic oversight – think AI/machine-learning engineers, data scientists, prompt engineers and more. The last such revolution came with the internet and computers, which did eliminate some jobs but created many more roles for people. Workers can prepare by enrolling in AI-centric courses to learn more about the booming technology and be better placed for the future.
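As flagged above, here is a minimal toy sketch of the gather-reason-act-evaluate loop that agentic systems are described as running. The Agent class, its stand-in methods and its three-step stopping rule are all hypothetical; in a real system, gather would call search or file APIs, reason would call an LLM, and act would trigger tools such as email or calendars.

```python
# Toy sketch of an agentic loop: gather information, reason about it,
# act, then evaluate the outcome and adapt. All tools and the goal
# test below are hypothetical stand-ins, not any real product's API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def gather(self) -> str:
        # Stand-in for calling search APIs, reading files, etc.
        return f"observation relevant to: {self.goal}"

    def reason(self, observation: str) -> str:
        # Stand-in for an LLM call that turns observations into a next step.
        return f"next step derived from '{observation}'"

    def act(self, step: str) -> bool:
        # Stand-in for sending an email, scheduling a meeting, etc.
        self.memory.append(step)
        return len(self.memory) >= 3  # pretend the goal takes three steps

    def run(self, max_iters: int = 10) -> list:
        for _ in range(max_iters):   # loop with minimal human oversight
            observation = self.gather()
            step = self.reason(observation)
            done = self.act(step)
            if done:                 # evaluate the outcome; stop or adapt
                break
        return self.memory

print(Agent(goal="plan a marketing campaign").run())
```

The difference from a chatbot is the loop itself: the agent keeps observing, deciding and acting until its goal test says it is done, rather than producing a single reply.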

OpenAI taps Google in unprecedented cloud deal despite AI rivalry

Time of India · 7 hours ago

OpenAI taps Google in unprecedented cloud deal despite AI rivalry

OpenAI plans to add Alphabet's Google Cloud service to meet its growing needs for computing capacity, three sources told Reuters, marking a surprising collaboration between two prominent competitors in the artificial intelligence sector.

The deal, which had been under discussion for a few months, was finalised in May, one of the sources added. It underscores how the massive computing demands of training and deploying AI models are reshaping competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major backer Microsoft, including through its high-profile Stargate data centre project.

It is a win for Google's cloud unit, which will supply additional computing capacity to OpenAI's existing infrastructure for training and running its AI models, said the sources, who requested anonymity to discuss private matters. The move also comes as OpenAI's ChatGPT poses the biggest threat to Google's dominant search business in years, with Google executives recently saying that the AI race may not be winner-take-all. OpenAI, Google and Microsoft declined to comment.

Since ChatGPT burst onto the scene in late 2022, OpenAI has dealt with increasing demand for computing capacity – known in the industry as compute – for training large language models, as well as for running inference, which involves processing information so people can use these models. OpenAI said on Monday that its annualised revenue run rate surged to $10 billion as of June, positioning the company to hit its full-year target amid booming adoption of AI.

Earlier this year, OpenAI partnered with SoftBank and Oracle on the $500 billion Stargate infrastructure programme, and signed deals worth billions with CoreWeave for more compute. It is on track this year to finalise the design of its first in-house chip, which could reduce its dependency on external hardware providers, Reuters reported in February.

The partnership with Google is the latest of several manoeuvres OpenAI has made to reduce its dependency on Microsoft, whose Azure cloud service had served as the ChatGPT maker's exclusive data centre infrastructure provider until January. Google and OpenAI discussed an arrangement for months but were previously blocked from signing a deal because of OpenAI's lock-in with Microsoft, a source told Reuters. Microsoft and OpenAI are also negotiating to revise the terms of their multibillion-dollar investment, including the future equity stake Microsoft will hold in OpenAI.

For Google, the deal comes as the tech giant is expanding external availability of its in-house chips, known as tensor processing units (TPUs), which were historically reserved for internal use. That has helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders.

Google's addition of OpenAI to its customer list shows how the tech giant has capitalised on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business. Google Cloud, whose $43 billion in sales comprised 12% of Alphabet's 2024 revenue, has positioned itself as a neutral arbiter of computing resources in an effort to outflank Amazon and Microsoft as the cloud provider of choice for a rising legion of AI startups whose heavy infrastructure demands generate costly bills.
Alphabet faces market pressure to demonstrate financial returns on its AI-related capital expenditures, which are expected to hit $75 billion this year, while maintaining its bottom line against the threat of competing AI offerings, as well as antitrust enforcement. Google's DeepMind AI unit also competes directly with OpenAI and Anthropic in a race to develop the best models and integrate those advances into consumer applications.

Selling computing power reduces Google's own supply of chips while bolstering capacity-constrained rivals. The OpenAI deal will further complicate how Alphabet CEO Sundar Pichai allocates capacity between the competing interests of Google's enterprise and consumer business segments. Google already lacked sufficient capacity to meet its cloud customers' demands as of the last quarter, Chief Financial Officer Anat Ashkenazi told analysts in April.

Although ChatGPT holds a large lead over Google's competing chatbot in terms of monthly users, and analysts have predicted it could reduce Google's dominant search market share, Pichai has brushed aside concerns that OpenAI will usurp Google's business dominance.
