Is Google's AI secretly reading your private texts? Here's what's really happening—and how to stop it before it's too late


Time of India | 10-07-2025
Google's AI now reads your messages: what's really going on, and how can you stop it?
Google's Gemini AI has officially begun scanning third-party apps like WhatsApp and Messages on Android devices, and that's raising serious privacy concerns, according to a recent Forbes report. Starting July 7, Google quietly enabled Gemini to access your apps to help with daily tasks like sending messages, making calls, or setting timers. But here's the twist: while Google claims it won't use your message data for AI training, the line between assistance and surveillance has started to blur. Many users didn't explicitly agree to this level of access, and they may not even realize it's happening.
What's really going on with Google and Microsoft AI reading texts?
A growing number of users are raising red flags about AI privacy concerns with Google and Microsoft, especially after reports revealed that their AI systems may be quietly analyzing private messages. While these tech giants claim it's to improve user experience with smarter replies and personalized suggestions, critics argue it's a massive overstep. The unsettling truth is that if you're using default settings in apps like Google Messages or Microsoft Teams, your texts could be scanned by AI without your full awareness. This silent data processing raises major ethical and security questions, and it's crucial that users understand how to take back control.
Google and Microsoft have begun integrating AI into messaging platforms, and these systems are reportedly capable of scanning your private texts. This goes beyond simple message summarization: they're actively reading and processing your conversations. The intent is to enhance user experience, for example by generating contextual replies and proactive suggestions, but the methods raise serious privacy concerns.
What is Google Gemini doing inside your messaging apps?
On July 7, 2025, Google rolled out an update allowing Gemini AI to access third-party apps on Android phones. This includes WhatsApp, Messages, and other popular chat apps, even if you didn't directly invite Gemini to do so. The company claims this helps users 'complete daily tasks,' such as reading messages aloud, replying to them, or initiating calls—but only when prompted.
However, according to Neowin and Ars Technica, there's a gray area. While Gemini isn't actively training on your private messages, it can still read your notifications, see your images, and interact with message content via Google Assistant or the Utilities app—even if you didn't enable it knowingly.
This raises big questions: If AI can 'assist' by reading your messages, how private are those messages anymore?
Can Google Gemini access your WhatsApp chats?
Technically, yes—but only through indirect paths. Google claims Gemini isn't designed to directly open and scan your WhatsApp conversations. But the integration with Google Assistant changes the story.
Let's say you ask Assistant to read a WhatsApp message or reply to it—that's when Gemini gets access. At that point, the AI engine can read, interpret, and even summarize message content, including media. This level of access doesn't require a deep hack—it's already baked into the OS.
According to Kaspersky's Marc Rivero, this setup "raises serious privacy red flags." Why? Because users aren't giving clear, informed consent. 'Private messaging apps are among the most sensitive digital spaces,' he said. 'Automatic access by AI without explicit permission undermines trust.'
Is there a way to disable Gemini from reading your messages?
Unfortunately, there's no single kill switch to shut down Gemini's message-reading capabilities. Google hasn't made it easy to opt out completely. Instead, users need to manually disable Gemini's access app-by-app. This involves digging into Settings, locating Gemini App Activity, and turning off permissions for each app it might access.
Even then, there's no clear confirmation that your messages won't be touched via notifications or other indirect pathways. The system relies on user proactivity, not automatic privacy protections.
As Ars Technica put it bluntly: 'A significant number of users don't want Gemini or other AI engines anywhere near their devices. For now, those users are being left in the dark.'
How does this compare to Microsoft's controversial Recall feature?
Google's move follows a controversial rollout by Microsoft earlier this year. Windows 11's Recall feature introduced an AI memory tool that captures snapshots of everything done on your PC—including open text messages—and stores them locally for AI use. Only Signal managed to block it.
While Google hasn't gone that far, critics say Gemini's access to messaging apps is a similar breach of user expectations. Even if it's not being used to train AI models right now, the possibility of human review or long-term storage of private content is concerning.
What should you do right now to protect your privacy?
If you're an Android user, take these steps immediately:
Open your phone Settings
Go to Apps > Gemini > Permissions
Disable access to apps like WhatsApp, Messages, and others
Go to Google Assistant > Settings > Notifications and limit what it can see
Check Gemini App Activity and turn it off if you don't use the AI assistant
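For technically inclined users, a similar lockdown can be approximated from a computer using ADB (Android Debug Bridge). The package name and permission strings below are illustrative assumptions, not details confirmed by the article; verify the real package on your device with `adb shell pm list packages` before acting. As a safety measure, this sketch only prints the commands (a dry run) so you can review them first.

```shell
#!/bin/sh
# Dry-run sketch: prints ADB commands that would restrict a Gemini app,
# rather than executing them. GEMINI_PKG is an assumed package name --
# confirm it on your own device before running any of these by hand.
GEMINI_PKG="com.google.android.apps.bard"

CMDS="adb shell pm revoke $GEMINI_PKG android.permission.READ_SMS
adb shell pm revoke $GEMINI_PKG android.permission.POST_NOTIFICATIONS
adb shell pm disable-user --user 0 $GEMINI_PKG"

# Print for review; copy-paste individual lines once you have verified them.
printf '%s\n' "$CMDS"
```

The last command, `pm disable-user`, disables the app entirely for the current user; omit it if you only want to trim permissions while keeping the assistant available.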
Also remember—your messages might not just involve you. If Gemini is scanning chats, it might be capturing content from others in your conversations, too, without their knowledge or consent.
Why does this matter more than ever?
The conversation around AI privacy is no longer hypothetical. These updates prove that tech companies are embedding AI deeper into our personal lives, often with minimal disclosure. While these tools may offer convenience, they also bring new risks—especially in areas like messaging, which users assume are off-limits.
As Gemini and other AI tools evolve, it's crucial to ask hard questions:
Who controls the data AI sees?
Are users truly opting in, or just being quietly enrolled?
What transparency exists for users to audit this behavior?
Until answers become clearer, the best defense is awareness. Know your settings, limit permissions, and share this information—because privacy isn't just personal anymore.
FAQs:
Q1: How do I stop Google Gemini AI from reading my WhatsApp messages?
You can stop it by disabling Gemini's app access in your Android settings manually.
Q2: Is Google Gemini AI using my messages for AI training?
No, Google says Gemini does not use your messages for training, but it still reads them for tasks.

Related Articles

Tech layoffs: TCS, Microsoft, Intel among 5 slashing thousands of jobs in apparent AI restructure
India Today | 3 minutes ago

Several global tech companies, including TCS, Microsoft, Intel, Meta, and Panasonic, are laying off thousands of employees this year as they adjust to changing business needs and the growing influence of artificial intelligence (AI). While some companies say the cuts are tied to efficiency and restructuring, others like TCS are pointing to internal challenges such as skill mismatches. Here is everything you need to know about the latest set of layoffs.

TCS job cuts: Tata Consultancy Services (TCS), India's biggest IT firm, is reducing its global workforce by about 2 per cent, which could affect around 12,000 employees. However, the company insists this move is not due to AI-based productivity gains. In a recent interview with Money Control, TCS CEO K Krithivasan clarified that the layoffs are happening because of limited options to redeploy employees who do not match current project requirements. He said the company is still actively hiring and training people, but some roles, especially at mid and senior levels, are hard to redeploy. He explained that while over 5.5 lakh employees have been trained in basic digital skills and nearly 1 lakh in advanced technologies, some senior professionals are struggling to adapt beyond the initial training levels. As a result, the company is finding it difficult to deploy them.

Microsoft job cuts: Microsoft, too, has let go of more than 15,000 employees this year. On top of that, around 2,000 workers labelled as underperformers have also exited the company. This comes even as Microsoft is reporting strong earnings and hitting record-high stock prices. In a memo to employees, CEO Satya Nadella acknowledged the layoffs and said he understands how difficult this period has been for everyone. Despite the company's solid financial health, he said restructuring is necessary to stay aligned with long-term goals, especially as Microsoft continues to invest heavily, around $80 billion, in building AI infrastructure.

Intel job cuts: Intel is making one of the biggest cuts this year, with plans to reduce its workforce by about 24,000 employees, roughly a quarter of its total staff. The decision was announced during the company's quarterly earnings update. Intel's new CEO, Lip-Bu Tan, said the company is focusing on becoming leaner and more efficient after overbuilding in areas where demand didn't materialise as expected. As part of this shift, Intel is cancelling some of its factory projects in Germany and Poland, and is moving some work from Costa Rica to Vietnam, impacting around 2,000 employees in Costa Rica.

Meta layoffs: Meanwhile, Meta has made fresh job cuts in its Reality Labs division, which looks after its VR and AR products, including games for its Quest headsets. While the company did not share exact numbers, teams working on some notable projects like the Supernatural fitness app were affected. Meta said the goal is to streamline operations and improve focus on future mixed reality experiences. Earlier this year, Meta had also cut 5 per cent of its workforce in a separate round, targeting underperforming employees.

Panasonic layoffs: Japanese tech giant Panasonic has also joined the list, with a plan to cut 10,000 jobs as part of a broader effort to reduce costs and invest more in future technologies like AI. About half of these cuts will be in Japan, with the rest overseas. CEO Yuki Kusumi said the company is shifting away from slower segments like TVs and some industrial products, and expressed regret over the decision but emphasised that these steps are necessary for long-term growth.

The chatbot culture wars are here
Indian Express | 3 minutes ago

For much of the past decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots such as ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Donald Trump.

On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI.' 'Once and for all, we are getting rid of woke,' he said in a speech. 'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries.' The order was announced alongside a new White House AI action plan that will require AI developers that receive federal contracts to ensure that their models' outputs are 'objective and free from top-down ideological bias.'

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American Founding Fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology.
Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and tweak their chatbots' outputs to reflect a broader set of political views.

Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns. The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavored concepts such as diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

If this playbook sounds familiar, it's because it mirrors the way Republicans have gone after social media companies for years — using legal threats, hostile congressional hearings and cherry-picked examples to pressure companies into changing their policies, or removing content they don't like. Critics of this strategy call it 'jawboning,' and it was the subject of a high-profile Supreme Court case last year.

In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging their tactics as unconstitutional. (In a 6-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring AI companies through the federal procurement process is necessary to stop AI developers from putting their thumbs on the scale. Is that hypocritical? Sure.
But recent history suggests that working the refs this way can be effective. Meta ended its long-standing fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

This time around, the critics cite examples of AI chatbots that seemingly refuse to praise Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as AI is integrated into fields such as education and health care.

There are a few problems with this argument, according to legal and tech policy experts I spoke to. The first, and most glaring, is that pressuring AI companies to change their chatbots' outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration's argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.

'What it seems like they're doing is saying, 'If you're producing outputs we don't like, that we call biased, we're not going to give you federal funding that you would otherwise receive,'' Genevieve Lakier, a law professor at the University of Chicago, said. 'That seems like an unconstitutional act of jawboning.'

There is also the problem of defining what, exactly, a 'neutral' or 'unbiased' AI system is. Today's AI chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they're using.
And testing an AI system for bias isn't as simple as feeding it a list of questions about politics and seeing how it responds. Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration's executive order would set 'a really vague standard that's going to be impossible for providers to meet.'

There is also a technical problem with telling AI systems how to behave. Namely, they don't always listen. Just ask Elon Musk. For years, Musk has been trying to create an AI chatbot, Grok, that embodies his vision of a rebellious, 'anti-woke' truth seeker. But Grok's behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as 'Mecha-Hitler.') At other times, it acts like a liberal — telling users, for example, that human-made climate change is real, or that the right is responsible for more political violence than the left. Recently, Musk has lamented that AI systems have a liberal bias that is 'tough to remove, because there is so much woke content on the internet.'

Nathan Lambert, a research scientist at the Allen Institute for AI, told me that 'controlling the many subtle answers that an AI will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.' It's not, in other words, as straightforward as telling an AI chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the 'model spec,' a set of instructions given to AI models about how they should act — there's no guarantee that these changes will consistently produce the behavior conservatives want.
But asking whether the Trump administration's new rules can survive legal challenges, or whether AI developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, AI companies, like their social media predecessors, may find it easier to give in than to fight. 'Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,' Lakier said. 'I'm surprised by how easily these powerful companies have folded.'

TCS shares slip nearly 2% after company announces over 12,000 job cuts
Economic Times | 33 minutes ago

Shares of Tata Consultancy Services (TCS), India's largest IT exporter, slipped 1.7% to an intraday low of Rs 3,081.20 on BSE on Monday, after the company announced plans to lay off around 2% of its global workforce, roughly over 12,000 employees, over the year. The move comes amid growing macroeconomic uncertainty and increasing AI-led disruptions impacting technology demand. As of the end of June 2025, TCS employed 613,069 people globally.

In a statement, the company said the layoffs would primarily impact middle and senior grades and are part of TCS's larger journey to become a 'future-ready organisation.' The company added that the deployment of some associates may no longer be feasible under current market conditions. It emphasized that the transition is being managed carefully to ensure continuity in client service. Affected employees will receive their full notice period compensation along with additional severance benefits. TCS also plans to provide insurance extensions, outplacement support, counseling, and transition assistance.

The decision follows closely on the heels of legal complaints filed by several employees against TCS's recently modified 'bench policy.' The updated policy reportedly allows just 35 annual days for employees to remain unassigned before being subject to performance-related action, and it requires a minimum of 225 billable days annually.

The broader IT industry has also shown signs of a slowdown. According to a previous report by ET, job additions across the top six Indian IT majors fell sharply by over 72% in the April–June quarter, with only 3,847 new hires compared to 13,935 in the preceding quarter.

Despite the layoffs, TCS reaffirmed its commitment to long-term strategic initiatives, including investments in new-age technologies, entry into new markets, deployment of AI at scale, deeper partnerships, and the development of next-generation infrastructure. On Friday, TCS shares closed flat at Rs 3,134.35 on the BSE.
