Google Gemini to run in-app tasks even with App Activity off: What it means
In a move that could ease privacy concerns while enhancing usability, Google will soon allow its Gemini AI assistant to work with key system apps on Android — even if users have Gemini Apps Activity turned off. This change rolls out starting July 7, 2025, according to an email shared with users and reported by Android Police.
What's changing and why does it matter?
Until now, using Gemini to send messages, control calls, or manage device settings required keeping the Gemini Apps Activity setting enabled — which meant Google could store and analyse your chat history to improve its AI. With this update, users will be able to access Gemini's integrations with apps like Phone, Messages, WhatsApp, and system Utilities without opting into long-term activity tracking.
So if you have disabled Gemini Apps Activity to avoid saving your chat history to your Google account, you will still be able to send a WhatsApp message, call a contact, or set a timer using Gemini — something that was not possible before.
Android Police reports that the change allows Gemini to interact with these system features 'whether your Gemini Apps Activity is on or off.' While the setting used to be a gatekeeper for these app-based actions, Google is now decoupling basic app access from full AI data collection.
What about privacy?
As per the report, even with this improvement, Google notes that Gemini interactions will still be temporarily stored for up to 72 hours — regardless of your activity setting — for 'security, safety, and feedback' purposes.
The Gemini Apps Activity toggle primarily affects whether conversations are saved to your Google account and used to personalize and train AI models. Disabling it means your chats will not show up in your activity history and will not be used to improve Gemini — but they may still be briefly stored for backend processing.
According to a statement Google gave to Android Authority, users will be able to disable these app connections entirely if they prefer. The company emphasized this is a 'good' move for users, enabling more device control without forcing them to contribute data to AI training.
This update also arrives as Gemini is set to replace Google Assistant on Android phones later this year. With that shift, enabling Gemini to handle core assistant tasks — even when tracking is off — makes it better aligned with what users expect from a virtual assistant.

Related Articles
Business Standard
Samsung launches Galaxy M36 5G in India with AI features: Price, specs
Samsung has launched the Galaxy M36 5G smartphone in India. Powered by the Exynos 1380 chip, it offers a suite of artificial intelligence features, and Samsung says it can record 4K video on both the front and rear cameras. The Galaxy M36 will be available in three colourways: Orange Haze, Serene Green, and Velvet Black.

Samsung Galaxy M36 5G: Price
6GB RAM + 128GB storage: Rs 17,499
8GB RAM + 128GB storage: Rs 18,999
8GB RAM + 256GB storage: Rs 21,999

Samsung Galaxy M36 5G: Availability and offers
The Galaxy M36 5G will go on sale in India starting July 12 on Amazon India, Samsung's online store, and select retail outlets. As an introductory offer, Samsung is offering a bank discount of Rs 1,000 on select bank cards.

Samsung Galaxy M36 5G: Details
The Galaxy M36 5G sports a 6.7-inch Full HD+ Super AMOLED display with a 120Hz refresh rate and Gorilla Glass Victus+ protection; a teardrop notch houses the front-facing camera. For imaging, it features a 50MP primary camera with optical image stabilisation (OIS), joined by an 8MP ultra-wide and a 2MP macro camera in the triple rear system. At the front is a 13MP sensor for selfies and video calls, and 4K recording is supported on both the front and rear cameras. The phone boots Android 15-based One UI 7 out of the box, offering AI-powered features such as AI Select, Object Eraser, and Image Clipper, alongside Google's Circle to Search and advanced Gemini AI features. Samsung is also promising six generations of OS upgrades. The Galaxy M36 5G packs a 5,000mAh battery and supports 25W wired charging.


Economic Times
AI at work: Job cuts and tech leader opinions
Amazon recently told nearly 350,000 of its employees that they must either relocate to one of its main office hubs, such as Seattle, Arlington (Virginia), and Washington, D.C., or leave the company without receiving severance pay. It is striking how readily the company is willing to let employees go simply to enforce a return to office-based work. However, this should not come as a surprise, given that Amazon CEO Andy Jassy has said repeatedly that AI adoption will reduce the company's corporate workforce. Like Amazon, many other major tech firms, including Meta, Microsoft, and Google, have also been through waves of layoffs, especially amid the disruption brought about by artificial intelligence (AI). In April, the United Nations Conference on Trade and Development (UNCTAD) warned that AI could affect up to 40% of jobs worldwide.

What CEOs say
Global tech leaders have been voicing concerns about this shift. In May, former Google CEO Eric Schmidt said professionals in many fields, including art and medicine, could become irrelevant if they do not adapt to AI. Around the same time, Nvidia CEO Jensen Huang said that every job will be affected by AI. 'You are not going to lose your job to AI, but you are going to lose your job to somebody who uses AI,' Huang said. Last month, Anthropic CEO Dario Amodei openly warned that AI could eliminate nearly half of all entry-level white-collar jobs, and soon. These are not just off-the-cuff claims; data backs them up.
According to a report by McKinsey and Company, between 400 million and 800 million jobs could be displaced worldwide within five years, depending on how quickly automation is adopted. That shift could force around 375 million workers, 14% of the global workforce, to transition into entirely new roles. In India, the Economic Survey 2024-25 has raised similar concerns, calling attention to workers' heightened worries and the speed at which AI is transforming the labour market. Hitesh Oberoi, CEO of Info Edge, recently said AI is not just about job cuts but about changing the nature of work, and emphasised the need to focus on developing new skills. Zoho founder Sridhar Vembu offered a more radical view, saying on X: 'The productivity revolution I see coming to software development (LLMs + tooling) could destroy a lot of software jobs. This is sobering but necessary to internalise.'

The other side
Not all CEOs are eager to expand AI use, however. Klarna Group, a fintech firm, has chosen to scale back its AI-powered customer service. CEO Sebastian Siemiatkowski explained that the model led to a drop in service quality, and the company is now adjusting its approach. 'Really investing in the quality of human support is the way of the future for us,' he said. Many workers, meanwhile, have taken a positive view of AI in the workplace: a study by SnapLogic found that 81% of office workers believe AI improves their job performance and overall work experience. While there is a diverse range of opinions on this topic, it is clear that workers will have to adapt to the changes brought by the AI revolution.


Time of India
Meet Gemma 3n: Google's lightweight AI model that works offline with just 2GB RAM
Google has officially rolled out Gemma 3n, its latest on-device AI model, first teased back in May 2025. What makes this launch exciting is that Gemma 3n brings full-scale multimodal processing (audio, video, image, and text) straight to smartphones and edge devices, all without needing constant internet access or heavy cloud support. It's a big step forward for developers looking to bring powerful AI features to low-power devices with limited memory.

At the core of Gemma 3n is a new architecture called MatFormer, short for Matryoshka Transformer. Think Russian nesting dolls: smaller, fully functional models tucked inside bigger ones. This setup lets developers scale AI performance to the device's capability. There are two versions: E2B runs on just 2GB of RAM, and E4B works with around 3GB. Despite packing 5 to 8 billion raw parameters, both versions behave like much smaller models in terms of resource use. That's thanks to smart design choices like Per-Layer Embeddings (PLE), which shift some of the load from the GPU to the CPU, helping save memory. It also features KV Cache Sharing, which speeds up processing of long audio and video inputs by nearly 2x, ideal for real-time use cases like voice assistants and mobile video analysis.

Gemma 3n isn't just light on memory; it's stacked with serious capabilities. For speech-based features, it uses an audio encoder adapted from Google's Universal Speech Model, which means it can handle speech-to-text and even language translation directly on your phone. It's already showing solid results, especially when translating between English and European languages like Spanish, French, Italian, and Portuguese.
On the visual front, it's powered by Google's new MobileNet-V5, a lightweight but powerful vision encoder that can process video at up to 60fps on phones like the Pixel. That means smooth, real-time video analysis without breaking a sweat, and it's also more accurate than older models. Developers can plug into Gemma 3n using popular tools like Hugging Face Transformers, Ollama, MLX, and more. Google has also kicked off the Gemma 3n Impact Challenge, offering a $150,000 prize pool for apps that showcase the model's offline capabilities.

The best part? Gemma 3n runs entirely offline: no cloud, no connection, just pure on-device AI. With support for over 140 languages and the ability to understand content in 35, it's a game-changer for building AI apps where connectivity is patchy or privacy is a priority.

Want to try Gemma 3n for yourself? Here's how you can get started:

Experiment instantly: Head over to Google AI Studio, where you can play around with Gemma 3n in just a few clicks. You can even deploy it directly to Cloud Run from there.
Download the model: Prefer working locally? The model weights are available on Hugging Face and Kaggle.
Dive into the docs: Google's documentation covers inference, fine-tuning, and building from scratch.
Use your favorite tools: Whether you're into Ollama, MLX, Docker, or Google's AI Edge Gallery, Gemma 3n fits right in.
Bring your own dev stack: Already using Hugging Face Transformers, TRL, NVIDIA NeMo, Unsloth, or LMStudio? You're covered.
Deploy it your way: Push to production with options like the Google GenAI API, Vertex AI, SGLang, vLLM, or the NVIDIA API Catalog.
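For a sense of what running Gemma 3n locally looks like in practice, here is a minimal Python sketch that prepares a request for Ollama's REST API, one of the supported local tools. The model tag "gemma3n:e2b" and the default local server address are assumptions for illustration; check `ollama list` on your machine for the actual tag.

```python
import json

# Ollama serves pulled models over a local REST API (default port 11434).
# The model tag below is an assumption; substitute whatever `ollama list` shows.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "gemma3n:e2b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete reply instead of chunked output
    }

payload = build_request("Translate 'good morning' into Spanish.")
body = json.dumps(payload)

# With an Ollama server running and the model pulled, the actual call
# would look like this (requires the third-party `requests` package):
# import requests
# reply = requests.post(OLLAMA_URL, json=payload, timeout=120).json()
# print(reply["response"])
```

Because the model runs entirely on-device, this request never leaves the machine, which is exactly the offline, privacy-friendly setup the article describes.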