Google warns you about Gemini, and it also applies to ChatGPT, Grok and all other AI chatbots: Do not …

Time of India · a day ago

Google has issued a stark warning to users of its Gemini AI assistant: do not share confidential information, as conversations may be reviewed by humans and retained for up to three years. This privacy caution extends beyond Google's platform, highlighting a critical concern that applies to all major AI chatbots, including ChatGPT, Grok, and others.
The warning comes as Google prepares to expand Gemini's access to Android users' phones, messages, and apps starting July 7, 2025. According to emails sent to users, Gemini will soon access Phone, Messages, WhatsApp, and Utilities applications regardless of whether users have enabled Gemini Apps Activity settings.
Human reviewers can access your AI conversations for quality control
Google's current privacy documentation reveals that when Gemini Apps Activity is enabled, data is stored for up to 18 months and may be reviewed by human moderators with personal identifiers removed. Even when the setting is disabled, data may still be retained for up to 72 hours for quality and security purposes.
This practice of human review is not unique to Google. Most major AI platforms employ similar quality control measures, making the confidentiality warning universally relevant. Companies like OpenAI, Anthropic, and others have acknowledged that human reviewers may examine conversations to improve AI performance and ensure safety compliance.
Privacy settings may not provide complete protection from AI data collection
The timing of Google's announcement has raised additional privacy concerns, as the expanded access will begin in less than two weeks. Users who wish to opt out can adjust settings through the Apps settings page, though Google has not provided clear instructions on the exact location of these controls.
The broader implication extends to all AI interactions: users should treat conversations with any AI assistant as potentially non-private. Whether discussing business strategies, personal matters, or sensitive information, the risk of human review means these platforms should not be considered secure channels for confidential communications.
This universal privacy principle applies regardless of the AI platform's promises about data protection.


Related Articles

Google unveils Doppl, a new app that lets you try new clothes digitally

Indian Express · 37 minutes ago

Google launched a new app called Doppl on Thursday, June 26, that lets you try on new clothes from the comfort of your couch. Part of Google Labs, the tech giant's initiative that lets users test new and experimental features, the app 'makes it fun and easy to see any outfit on a digital, animated version of yourself,' the company said. Last month, Google Shopping announced that users would be able to try on 'billions of clothing items' by simply uploading a photo. In a blog post, Google said that Doppl builds on these capabilities and introduces new features, such as converting static images into AI-generated videos that show how an outfit might look in motion. Users can also save their favourite looks, browse through them and share them with others.

If users come across an outfit they like, say from a friend, a local thrift shop or on social media, Google says they can take a picture of it and use Doppl to see how it looks on them. For now, the app supports tops, bottoms and dresses; shoes, lingerie, bathing suits and accessories are not available for try-on. Because the app is still experimental, Google says it might not get appearance and clothing details right every time, and it may 'imagine parts of an outfit if there are missing elements.' For example, if you upload a picture of only a shirt, Doppl will generate an image that includes matching pants, shoes and accessories.

Doppl is available on both Android and iOS but is currently limited to users in the United States, and it is not yet known whether Google plans to expand availability to other regions. This isn't Google's first venture into this space: back in 2023, the company introduced virtual try-on technology that let users see how a piece of clothing would look on a wide range of models. Earlier this year, Glance, the popular consumer-centric AI-based software company, launched Glance AI, an app that helps users discover new products using AI.

Akhil Arora: Espire Hospitality Plans Major Expansion to Double Hotel Inventory by FY26, ET HospitalityWorld

Time of India · an hour ago

Google launches Gemma 3n, multimodal Open Source AI model that runs on just 2GB RAM without internet

India Today · an hour ago

Google has announced the full launch of its latest on-device AI model, Gemma 3n, which was first announced in May 2025. The model brings advanced multimodal capabilities, including audio, image, video and text processing, to smartphones and edge devices with limited memory and no internet connection. With this release, developers can deploy AI features that used to require powerful cloud infrastructure directly on phones and other low-power devices.

At the heart of Gemma 3n is a new architecture called MatFormer, short for Matryoshka Transformer. Google explains that, much like Russian nesting dolls, the model includes smaller, fully functional sub-models inside larger ones. This design makes it easy for developers to scale performance based on available hardware. Gemma 3n is available in two versions: E2B, which operates on as little as 2GB of memory, and E4B, which requires around 3GB. Despite having 5 to 8 billion raw parameters, both models perform like much smaller models in terms of resource use. This efficiency comes from innovations like Per-Layer Embeddings (PLE), which shift some of the workload from the phone's graphics processor to its central processor, freeing up valuable memory.

Gemma 3n also introduces KV Cache Sharing, which significantly speeds up how the model processes long audio and video inputs. Google says this improves response times by up to two times, making real-time applications like voice assistants or video analysis much faster and more practical on mobile devices.

For speech-based features, Gemma 3n includes a built-in audio encoder adapted from Google's Universal Speech Model. This allows it to perform tasks like speech-to-text and language translation directly on a phone. Early tests have shown especially strong results when translating between English and European languages like Spanish, French, Italian and Portuguese.

The visual side of Gemma 3n is powered by MobileNet-V5, Google's new lightweight vision encoder. This system can handle video streams at up to 60 frames per second on devices like the Google Pixel, enabling smooth real-time video analysis. Despite being smaller and faster, it outperforms previous vision models in both speed and accuracy.

Developers can access Gemma 3n via popular tools like Hugging Face Transformers, Ollama, MLX and others. Google has also launched the "Gemma 3n Impact Challenge," inviting developers to create applications using the model's offline capabilities; winners will share a $150,000 prize pool. Because the model can operate entirely offline, it opens the door for AI-powered apps in remote areas or privacy-sensitive situations where cloud-based models aren't viable. With support for over 140 languages and the ability to understand content in 35 languages, Gemma 3n sets a new standard for efficient, accessible on-device AI.
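For readers who want to try the model, the article only names the tools (Hugging Face Transformers, Ollama, MLX). The sketch below is an illustration of what loading a Gemma 3n checkpoint through the Transformers pipeline API could look like; the model ID (google/gemma-3n-E2B-it), the "image-text-to-text" task name and the output handling are assumptions rather than details from the article, so check the official model card for the exact usage your library version supports.

```python
# Minimal sketch (assumptions noted): run a Gemma 3n checkpoint locally
# with Hugging Face Transformers. Requires `transformers`, `torch` and
# `accelerate`; the model ID below is an assumed Hub identifier for the
# 2GB-class (E2B) instruction-tuned variant.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",            # multimodal chat task (assumed for this model)
    model="google/gemma-3n-E2B-it",  # assumed Hub ID; verify on the model card
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Text-only chat turn; image or audio parts could be appended to the content list.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarise what on-device AI means in one sentence."}
        ],
    }
]

result = pipe(text=messages, max_new_tokens=64)
print(result[0]["generated_text"])  # output structure may vary by transformers version
```

On the command line, Ollama offers a similar one-step workflow (for example `ollama run gemma3n`, if Google has published that tag), which downloads and quantizes the model automatically for offline use.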
