Google's New Gmail Decision—What 3 Billion Users Must Do Now

Forbes | April 14, 2025

All change for Gmail.
Interesting times for Gmail. Google has confirmed that its two new headline upgrades don't actually work together, raising awkward questions for its 3 billion users. All those users must now grapple with a new upgrade decision, one with major implications for the future direction of the platform and for what Google does next.
We're talking AI, security and privacy. The two upgrades that clash are a form of quasi end-to-end encryption and an AI-fueled search alternative that promises a markedly different quality of results. Unfortunately, Google can't see your fully encrypted emails (rightly so), and must exclude those emails from its AI search results.
This will not be a one-off situation. Cloud-based AI features and device-based security do not work well together, leaving users with some tough decisions to make. Apple had seemed to resolve this with its Private Cloud Compute, but then its disastrous Apple Intelligence delays made all that somewhat academic, for the time being at least.
Newsweek has now neatly framed the decision facing Gmail users: 'Allow Gmail to use AI-driven tools by enabling 'smart features' and data sharing,' or switch those features off and lose that functionality.
But this isn't a simple yes or no. Gmail and other email platforms are driving towards an AI future, and it won't be that easy to disable all the AI settings. Remember, Google has access to all your content on its cloud servers anyway. The only real opt-out (bar changing a bunch of admin settings) is to fully secure your emails.
And on that, the caveats and qualifiers around Gmail's end-to-end encryption highlight how difficult a medium email is to fully secure. Even with Advanced Data Protection enabled, iCloud Mail is one of the few categories of iPhone data that is not end-to-end encrypted. As Apple explains, 'iCloud Mail does not use end-to-end encryption because of the need to interoperate with the global email system,' albeit 'all native Apple email clients support optional S/MIME for message encryption.'
Beyond email, this new AI decision is not specific to Gmail. You will see variations from multiple other platforms in the coming months. WhatsApp's new Advanced Chat Privacy stops users from exporting entire chats or saving media to their phone gallery (easily bypassed, of course), but it also blocks Meta AI from being used within a chat. Ask yourself why. Again, new AI updates and security and privacy don't work well together.
Meanwhile, OpenAI's ChatGPT 'will now remember your old conversations,' which sounds great until you consider the privacy implications of all that personal data being stored in a readily accessible way. In all likelihood nothing has changed from a data standpoint, except the optics. But even as Sam Altman posted his excitement at 'AI systems that get to know you over your life, and become extremely useful and personalized,' the move unsurprisingly 'sparked privacy concerns.'
Users can choose to opt out of ChatGPT's update, but just as with Gmail, there is little in the way of user education or understanding. What are the risks and trade-offs? What happens to your data, and how is it secured and safeguarded? Is there even a common vocabulary to explain this choice across platforms?
Specifically on Gmail — yes, you must decide whether you want AI marauding across your data, but you also need to be clear on the pros and cons of email itself. It's not a fully secure platform and is open to malware, phishing and spam. All of that will also get worse as the trickle of AI attacks becomes a tidal wave. In reality, there's a much larger rethink required. My recommendation is to keep all that in mind as you decide.


Related Articles

Behind the Curtain: What if they're right?

Axios | 13 minutes ago

During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can't shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: "Well, what if they're right?"

Why it matters: We wanted to apply this question to what seems like the most outlandish AI claim — that in coming years, large language models could exceed human intelligence and operate beyond our control, threatening human existence. That probably strikes you as science-fiction hype. But Axios research shows at least 10 people have quit the biggest AI companies over grave concerns about the technology's power, including its potential to wipe away humanity. If it were one or two people, the cases would be easy to dismiss as nutty outliers. But several top execs at several top companies, all with similar warnings? Seems worth wondering: Well, what if they're right?

And get this: Even more people who are AI enthusiasts or optimists argue the same thing. They, too, see a technology starting to think like humans, and imagine models a few years from now starting to act like us — or beyond us. Elon Musk has put the risk as high as 20% that AI could destroy the world. Well, what if he's right?

How it works: There's a term the critics and optimists share: p(doom). It means the probability that superintelligent AI destroys humanity. So Musk would put p(doom) as high as 20%. On a recent podcast with Lex Fridman, Google CEO Sundar Pichai, an AI architect and optimist, conceded: "I'm optimistic on the p(doom) scenarios, but ... the underlying risk is actually pretty high." But Pichai argued that the higher it gets, the more likely that humanity will rally to prevent catastrophe. Fridman, himself a scientist and AI researcher, said his p(doom) is about 10%. Amodei is on the record pegging p(doom) in the same neighborhood as Musk's: 10-25%.

Stop and soak that in: The very makers of AI, all of whom concede they don't know with precision how it actually works, see a 1 in 10, maybe 1 in 5, chance it wipes away our species. Would you get on a plane at those odds? Would you build a plane and let others on at those odds? Once upon a time, this doomsday scenario was the province of fantasy movies. Now, it's a common debate among those building large language models (LLMs) at giants like Google and OpenAI and Meta. To some, the better the models get, the more this fantastical fear seems eerily realistic.

Here, in everyday terms, is how this scenario would unfold: It's already a mystery to the AI companies why and how LLMs actually work, as we wrote in our recent column, "The scariest AI reality." Yes, the creators know the data they're stuffing into the machine, and general patterns LLMs use to answer questions and "think." But they don't know why the LLMs respond the way they do.

Between the lines: For LLMs to be worth trillions of dollars, the companies need them to analyze and "think" better than the smartest humans, then work independently on big problems that require complex thought and decision-making. That's how so-called AI agents, or agentics, work. So they need to think and act like Ph.D. students. But not one Ph.D. student. They need almost endless numbers of virtual Ph.D. students working together, at warp speed, with scant human oversight, to realize their ambitions. "We (the whole industry, not just OpenAI) are building a brain for the world," OpenAI CEO Sam Altman wrote last week.
What's coming: You'll hear more and more about artificial general intelligence (AGI), the forerunner to superintelligence. There's no strict definition of AGI, but independent thought and action at advanced human levels is a big part of it. The big companies think they're close to achieving this — if not in the next year or so, soon thereafter. Pichai thinks it's "a bit longer" than five years off. Others say sooner. Both pessimists and optimists agree that when AGI-level performance is unleashed, it'll be past time to snap to attention.

Once the models can start to think and act on their own, what's to stop them from going rogue and doing what they want, based on what they calculate is their self-interest? Absent a much, much deeper understanding of how LLMs work than we have today, the answer is: Not much. In testing, engineers have found repeated examples of LLMs trying to trick humans about their intent and ambitions. Imagine the cleverness of the AGI-level ones. You'd need some mechanism to know the LLMs possess this capability before they're used or released in the wild — then a foolproof kill switch to stop them.

So you're left trusting the companies won't let this happen — even though they're under tremendous pressure from shareholders, bosses and even the government to be first to produce superhuman intelligence. Right now, the companies voluntarily share their model capabilities with a few people in government. But not with Congress or any other third party with teeth. It's not hard to imagine a White House fearing China getting this superhuman power before the U.S. and deciding against any and all AI restraints. Even if U.S. companies do the right thing, or the U.S. government steps in to impose and use a kill switch, humanity would be reliant on China or other foreign actors doing the same.

When asked if the government could truly intervene to stop an out-of-control AI danger, Vice President Vance told New York Times columnist Ross Douthat on a recent podcast: "I don't know. Because part of this arms-race component is: If we take a pause, does [China] not take a pause? Then we find ourselves ... enslaved to [China]-mediated AI." That's why p(doom) demands we pay attention ... before it's too late.

This epic ChatGPT discount is too good to resist if you run a small team — save 96%

Tom's Guide | 15 minutes ago

Yes, ChatGPT can be yours to use completely free of charge. But these days, all of the good features are locked behind a paywall. The good news is that if you've been eyeing up some of the fancier features of ChatGPT, now is a great time to test them out. Right now, OpenAI is offering a hefty 96% discount on ChatGPT Team. While that sounds like something you would need your own business for, anybody can sign up for ChatGPT Team, which offers access to some of the best paid-for features for up to five people on one account.

The first step in getting this deal is to head over to this link on the ChatGPT website. There you will see a discounted version of ChatGPT Team, with the price down from $30/£30 a month to just $1/£1. The deal only lasts for one month, but can be accessed by up to five people in that time. When you arrive on this page, it will ask you to either sign in to your account or create a new one to verify your eligibility for the promotion. Even if you already have an account, you can still get the discount. Once you've signed in or created a new account, simply continue through and make the $1/£1 payment.

If you know you don't want to pay the $30/£30 the following month, head to the account settings and go into the manage subscription or plan section. From there, a team plan option will allow you to cancel. Once your account is created, you'll be asked to set up a workspace. From here, you can invite people to join that workspace and get access to all of the features of ChatGPT Team.

The Team plan is one step up from ChatGPT Plus. That includes higher limits on the model's output and faster performance. You can tailor ChatGPT to your work with file uploads and projects, and lock its memory to the files you work with most. This plan includes dedicated workspaces, which let you organize AI interactions into departments, projects and general structures. This is especially useful if you want to keep images, conversations and ideas in separate areas.

One of the most useful features of this plan, especially for businesses, is that ChatGPT doesn't train on any data produced by a Team account. This means you can upload work files or come up with ideas safely. It includes unlimited access to GPT-4o, OpenAI's flagship multimodal model, and offers all of the important advanced features of ChatGPT, like the ability to analyse visual data and generate images. If you decide to keep the plan past the first month, everything revolves around the interactions of a team, allowing you to manage users, set up admin controls and give different account members roles and levels of access.

Can New Nanosheet Chip Nodes Cement TSM's Long-Term Tech Leadership?

Yahoo | 36 minutes ago

Taiwan Semiconductor Manufacturing Company TSM, also known as TSMC, is reinforcing its nanosheet chip technology roadmap with N2, N2P, and A16, which are aimed at extending performance and efficiency at advanced nodes. Out of these, the A16 advanced node has been specifically designed to accommodate high-performance computing (HPC) products and AI-class workloads. These advanced nodes are expected to extend TSMC's technology leadership position and enable it to capture growth opportunities into the future.

Taiwan Semiconductor's N2 logic node is the first generation of nanosheet transistor technology and offers 10-15% speed improvement, 25-30% power improvement, and more than 15% chip density gain when compared with N3E. The logic node is on track to enter volume production in the second half of 2025. Moreover, TSMC also expects the number of new tape-outs for N2, fueled by both smartphone and HPC applications, to be higher than both 3-nanometer and 5-nanometer in their first two years.

Taiwan Semiconductor is winning important customers, such as AMD and Apple, for its new N2 advanced logic node. Apple plans to use these chips in future iPhones and Mac devices, while AMD wants to use them to make faster CPUs and GPUs to compete better in the market.

Introduced in the first quarter of 2025, N2P is an extension of the N2 family that features improved performance and power efficiency. A16 is also a nanosheet-based technology with backside power delivery and is intended to be a logical upgrade to N2P. The A16 logic node, also introduced in the first quarter, is expected to deliver up to 15-20% power improvement and an additional 7-10% chip density gain when compared with N2P.

With fabs under construction in Taiwan and Arizona to support both N2 and A16, and volume production of N2P and A16 scheduled for the second half of 2026, Taiwan Semiconductor appears well-positioned to benefit from the next wave of AI and high-performance computing demand.

Two key competitors, Intel INTC and GlobalFoundries GFS, are working hard to catch up with TSMC's lead. Intel plans to launch its 2nm-class node, called Intel 18A, in the second half of 2025. Intel is focusing on new technologies like RibbonFET transistors and backside power delivery for its 18A node, similar to TSMC's A16. GlobalFoundries is not focused on 2nm chips. Instead, it is investing in specialized chips for cars, wireless devices, and the Internet of Things (IoT). While GlobalFoundries does not directly compete with Taiwan Semiconductor at the most advanced nodes, it still aims to take market share in fast-growing chip segments.

Shares of Taiwan Semiconductor have gained 9.7% year to date compared with the Semiconductor - Circuit Foundry industry's growth of 8.7%. From a valuation standpoint, TSMC trades at a forward price-to-sales ratio of 9.08X, slightly higher than the industry's average of 9.02X. The Zacks Consensus Estimate for TSMC's 2025 and 2026 earnings implies year-over-year growth of 30.54% and 14.80%, respectively. The estimates for 2025 have been revised upward in the past 60 days, while the estimates for 2026 have witnessed a downward revision over the same time frame.

TSMC currently carries a Zacks Rank #3 (Hold). You can see the complete list of today's Zacks #1 Rank (Strong Buy) stocks here.
