Latest news with #PrivateCloudCompute

Business Standard
4 days ago
Soon, WhatsApp will offer AI-powered summaries of unread messages: Details
WhatsApp is testing a new AI-driven feature that enables users to privately generate summaries of unread messages in both personal and group chats. As per WABetaInfo, the feature is backed by Meta AI but operates within a system called Private Processing, which is designed to safeguard user privacy.

Earlier this year, Meta announced its plans to introduce cloud-based AI features on WhatsApp, including tools to summarise unread chats and provide writing suggestions. These capabilities now appear to be entering the beta testing stage. According to WABetaInfo, the message summary function is currently available to a limited group of Android beta testers, with a wider rollout anticipated in the coming weeks.

AI summary for WhatsApp unread messages: How it works

The report explains that once Private Processing is enabled in the app's settings, users will see a special button when they have multiple unread messages. Tapping this button sends a secure, private request to the Private Processing system, which then generates a summary of the unread content. The report highlights that this process remains fully private, with no user data being stored. Private Processing handles all data inside a confidential computing environment, without access by WhatsApp, Meta, or any third-party entity. User interactions remain anonymous and cannot be linked to a personal identity, thanks to encrypted connections and secure routing.

This functionality can be particularly useful for quickly catching up on long threads in active group chats, offering a brief overview without needing to scroll through every message, while keeping user privacy intact.

The AI message summary is entirely optional. Users who prefer not to use it can simply ignore the unread messages button or disable the feature via the Private Processing settings. Additionally, users with Advanced Chat Privacy enabled will not have access to the summary function.

What is Private Processing?
Meta previously introduced Private Processing as a way to protect user data while handling AI requests in the cloud. The system, described as a confidential computing framework, mirrors Apple's Private Cloud Compute (PCC), which was introduced last year to support Apple Intelligence while preserving privacy. According to Meta, Private Processing enables secure handling of AI-based tasks like message summarisation and writing suggestions, without granting access to personal data. It creates a protected virtual cloud environment where tasks are performed without Meta—or anyone else—seeing the actual message content.
Yahoo
27-05-2025
Apple Is Investing $500 Billion in These States Over the Next 4 Years
Apple is the world's third-largest company, with a market capitalization of around $3 trillion, so it came as no surprise when CEO Tim Cook announced that Apple would spend and invest a significant amount of money in the U.S. over the next four years. It was the size of that commitment, $500 billion and the creation of 20,000 jobs, that made headlines.

The February announcement described new projects and expansions for facilities in nine states. These states stand to benefit from Apple's investment: Arizona, California, Iowa, Michigan, Nevada, North Carolina, Oregon, Texas and Washington.

Apple plans to divide its $500 billion investment among new projects and expansion of existing projects. Apple has selected Houston as the site of a new 250,000-square-foot manufacturing facility that will make energy-efficient servers for Apple Intelligence, the company's personal intelligence system. The servers are also integral to Apple's Private Cloud Compute, the security architecture behind its cloud-based AI processing. Although Apple will provide the infrastructure for server production, the servers are manufactured by Foxconn, a Taiwanese electronics company, according to a recent report from The New York Times. The plant will not build Apple's consumer products, which are primarily manufactured overseas.

TSMC Fab 21, a fabrication plant in Arizona that already produces silicon for Apple, will see more business as a result of an increase in Apple's U.S. Advanced Manufacturing Fund. The $5 billion fund is doubling in size to $10 billion.

Detroit will be home to a new academy Apple plans to build 'to train the next generation of U.S. manufacturers,' according to the statement. In addition to offering training and skills development, the Apple Manufacturing Academy will serve as a resource for companies looking to implement AI and smart manufacturing.

Apple operates data centers throughout the U.S. and overseas.
Its increased server production will allow the company to continue expanding its data center capacity in Arizona, Iowa, Nevada, North Carolina and Oregon. Seattle is among the sites expected to see some of the 20,000 new jobs Apple plans to create. Axios reported in February that Apple has already doubled the size of its Seattle team over the past three years.

Apple is actively preparing students for careers in hardware engineering and silicon chip design through its New Silicon Initiative, which works with eight schools throughout the U.S. As part of its expansion, the program will collaborate with UCLA's Center for Education of Microchip Designers.

While the $500 billion spending commitment is Apple's largest, it's not the only major commitment it has made in recent years. In 2018, during President Donald Trump's first term, the company said it would contribute $350 billion to the U.S. economy and create 20,000 jobs in the coming five years. As part of that commitment, Apple created the $5 billion Advanced Manufacturing Fund that the new commitment is expanding.

In April 2021, shortly after President Joe Biden took office, Apple made a similar commitment. In addition to pledging $430 billion in U.S. spending, including for a North Carolina campus the company later put on hold, Apple said it was on target to meet its 2018 hiring goal by 2023 and would add an additional 20,000 jobs by 2026. It's unclear whether Apple has met its 2018 or 2021 hiring and spending goals, or whether the $500 billion commitment consists solely of new U.S. investment.


Time of India
01-05-2025
WhatsApp launches 'Private Processing' to enhance AI chat privacy: Report
Meta, the parent company of WhatsApp, has unveiled a new feature aimed at strengthening user privacy while engaging with artificial intelligence tools within the app. The feature, titled "Private Processing," is designed to allow users to interact with Meta AI in a more secure and confidential manner. Unlike standard AI chats or queries processed over traditional cloud infrastructure, Private Processing ensures that neither Meta, WhatsApp, nor third-party entities can access the user's data once the session ends.

According to Meta's announcement, this function will be optional and is expected to roll out in the coming weeks. The company claims the system is designed with both privacy and auditability in mind, featuring enhanced protections against external and internal threats. Meta has also taken steps to make the system verifiable by independent parties and more resistant to cyberattacks, further aligning itself with evolving privacy standards in the tech industry.

What is Private Processing

Private Processing is a confidential AI interaction system being added to WhatsApp that enables users to interact with Meta AI without leaving data traces that can be accessed later. When enabled, it provides a temporary processing session for tasks such as generating AI summaries, retrieving information, or engaging in chat-based queries, all without storing or linking the user's messages to identifiable metadata once the interaction ends.
Key characteristics:
- User-initiated and entirely optional
- No retention of messages after the session ends
- Inaccessible to Meta, WhatsApp, or third-party vendors once the session ends
- Supports end-to-end encryption principles

WhatsApp's 'Private Processing' secures AI chats with no data stored

Meta emphasises that Private Processing is built with security at its core. Once the AI completes a user's request, the session data is discarded, ensuring that:
- The system does not retain user messages, even temporarily, for future use.
- Even if a hacker gains access to Meta's infrastructure, they would be unable to access historical Private Processing interactions.

Additional safeguards:
- Meta is integrating Private Processing into its bug bounty program, encouraging ethical hackers to identify potential vulnerabilities before launch.
- A detailed security engineering design paper will be released ahead of the full rollout, outlining the architecture, privacy logic, and threat models.
- Meta will allow independent audits to verify that the feature meets stated privacy expectations and performs securely in real-world environments.

Meta's Private Processing vs. Apple's Private Cloud Compute

The Private Processing model bears similarities to Apple's Private Cloud Compute (PCC), a system introduced for confidential cloud-based AI interactions. Both aim to achieve secure, privacy-respecting processing outside the user's device by using advanced cryptographic protocols and secure hardware environments.
Feature comparison:
- Deployment platform: Meta's Private Processing runs in WhatsApp (cloud-based AI interaction); Apple's PCC serves iOS/macOS devices as a server-side fallback.
- Default status: Private Processing is optional and user-initiated; Apple processes on-device by default and uses PCC as a fallback.
- Data retention: Private Processing retains no messages post-session; PCC keeps data minimal and encrypted when stored briefly.
- Obfuscation protocol: both use OHTTP (Oblivious HTTP) to obscure user IPs; Meta routes requests via a third-party relay.
- Auditability: Private Processing allows independent third-party verification; Apple audits PCC and claims a verifiable design.

While both systems use Oblivious HTTP (OHTTP) to hide user IP addresses from Meta or Apple, Meta's implementation is user-triggered, whereas Apple's approach favors on-device processing by default, switching to PCC when server-side processing is necessary.

Role of OHTTP and third-party relays

A core component of Private Processing is its reliance on Oblivious HTTP (OHTTP), a web standard that separates IP address visibility from the content being processed. Requests made to Meta's servers are relayed through independent third-party providers, ensuring that:
- Meta can see the request content but not the user's identity.
- The relay provider sees the IP address but not the content.

This privacy split ensures no single party has access to both the user's identity and the request content, creating a privacy-preserving pipeline for AI queries.

Auditability and transparency measures

To maintain public trust, Meta has built in mechanisms for external verification:
- Independent researchers and privacy watchdogs can audit Private Processing.
- The bug bounty program enables ongoing white-hat testing.
- A soon-to-be-published security white paper will provide the technical blueprint of the system, enabling academic scrutiny.

Meta's approach aligns with emerging industry standards demanding that privacy-focused claims be independently verifiable and not rely solely on corporate assurances.
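The OHTTP privacy split described above can be sketched as a toy model. Everything here is illustrative: the `Relay` and `Gateway` classes and the reversed-bytes "encapsulation" stand in for real HPKE-based OHTTP (RFC 9458) and are not Meta's implementation.

```python
# Toy model of the OHTTP split: the relay sees the client's IP but only an
# opaque blob; the gateway sees the content but never the IP.

def encapsulate(content: str) -> bytes:
    # Reversible toy transform, NOT encryption; stands in for HPKE sealing.
    return content.encode()[::-1]

def decapsulate(blob: bytes) -> str:
    return blob[::-1].decode()

class Gateway:
    """Operator-side endpoint: decapsulates content, never learns the IP."""
    def __init__(self):
        self.observed = []
    def handle(self, blob: bytes) -> bytes:
        content = decapsulate(blob)
        self.observed.append({"content": content})  # no client IP available here
        return encapsulate(f"summary of: {content}")

class Relay:
    """Independent third party: forwards blobs, stripping the client IP."""
    def __init__(self):
        self.observed = []
    def forward(self, client_ip: str, blob: bytes, gateway: Gateway) -> bytes:
        self.observed.append({"ip": client_ip})  # relay never sees plaintext
        return gateway.handle(blob)              # IP is not passed along

relay, gateway = Relay(), Gateway()
reply = relay.forward("203.0.113.7", encapsulate("3 unread messages"), gateway)
print(decapsulate(reply))  # -> summary of: 3 unread messages
```

Each party's `observed` log makes the split auditable in miniature: the relay's records contain only IPs, the gateway's only content, so neither can link identity to query on its own.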
Broader implications for messaging privacy and AI integration

The introduction of Private Processing indicates a growing shift in how large tech companies balance AI capabilities with user privacy demands. As more users become concerned about data surveillance, profiling, and cyberthreats, features like Private Processing represent an effort to hand control back to the user while still allowing for advanced functionality such as chat-based AI support. With messaging apps becoming hubs for AI-powered tools, ensuring the confidentiality of queries and outputs is critical to maintaining both compliance with global privacy regulations and user confidence.

Private Processing launch timeline and availability

According to Meta, Private Processing will:
- Be available to WhatsApp users in selected regions in the coming weeks.
- Roll out initially as an opt-in feature.
- Eventually integrate more AI capabilities as the infrastructure matures and proves secure.

Users will be able to enable or disable the feature from within WhatsApp's AI tools settings, giving them complete control over when and how their data is processed.

Business Standard
30-04-2025
WhatsApp's AI requests to be processed on cloud using private compute: Meta
Meta has introduced a cloud-based confidential computing system for WhatsApp, similar to Apple's Private Cloud Compute, to ensure user privacy during AI task processing.

New Delhi: Meta has announced that it will soon enable cloud-based processing for artificial intelligence (AI) features on WhatsApp. These new capabilities, including summarising unread messages and providing writing suggestions, will be rolled out while upholding WhatsApp's security and privacy standards, the company said.

To ensure privacy while processing user data on the cloud, Meta has introduced a system called Private Processing. Described as a confidential computing infrastructure, Private Processing allows AI requests to be handled in a protected cloud environment. The system mirrors Apple's Private Cloud Compute (PCC), introduced last year for its Apple Intelligence suite, which processes complex user requests while preserving data privacy.

According to Meta, Private Processing allows AI tools to function securely without exposing personal user data. The system creates a confidential virtual environment in the cloud, ensuring that tasks such as message summarisation and writing suggestions are processed without access to the actual content by Meta or any third party.

Meta outlines the process as follows:
- When a user requests an AI function, such as summarising chats, WhatsApp sends the request to Meta's cloud servers.
- The system verifies that the request is coming from an authentic WhatsApp app on a legitimate device.
- The request is then encrypted and anonymised, ensuring Meta cannot identify the user or the origin of the request.
- Once the request reaches Meta's servers, it is processed inside a confidential virtual machine (CVM). Meta claims that no one, including the company itself, can view the contents being processed.
- The response, such as a message summary, is encrypted and sent back to the user's device. Only the originating device can decrypt and read the result.
- Meta states that it does not retain messages after processing is completed.

Meta said it will begin rolling out Private Processing in the coming weeks. The first features to use the system will include AI-generated message summaries and writing suggestions. The company plans to expand the use of this privacy-preserving technology to more AI features in future updates.
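The request flow Meta describes can be simulated end-to-end in a short sketch. Note the heavy caveats: the XOR-with-hashed-keystream "cipher" and the direct hand-off of the session key to the CVM function are toy stand-ins; the real system uses attested key exchange, device verification, and production-grade encryption, none of which is modeled here.

```python
import hashlib
import secrets

def keystream(key: bytes, label: bytes, n: int) -> bytes:
    """Expand a session key into n bytes (toy KDF via chained hashing)."""
    out, block = b"", key + label
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor(data: bytes, key: bytes, label: bytes) -> bytes:
    """Toy symmetric cipher: XOR against the expanded keystream."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, label, len(data))))

def device_build_request(messages: list[str], session_key: bytes) -> dict:
    # The request carries ciphertext only -- no user identifier attached.
    payload = "\n".join(messages).encode()
    return {"ciphertext": xor(payload, session_key, b"req")}

def cvm_process(request: dict, session_key: bytes) -> bytes:
    # Inside the confidential VM: decrypt, summarise, re-encrypt, retain nothing.
    messages = xor(request["ciphertext"], session_key, b"req").decode().split("\n")
    summary = f"{len(messages)} unread messages".encode()
    return xor(summary, session_key, b"resp")

session_key = secrets.token_bytes(32)  # generated on the device
request = device_build_request(["hi", "are you there?", "ping"], session_key)
encrypted_reply = cvm_process(request, session_key)
# Only the originating device holds session_key, so only it can decrypt:
print(xor(encrypted_reply, session_key, b"resp").decode())  # -> 3 unread messages
```

The shape of the flow matches the bullets above: the request leaves the device encrypted and anonymous, the summary is computed inside an opaque boundary, and the encrypted reply is useless to anyone without the device-held key.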


Forbes
14-04-2025
Google's New Gmail Decision—What 3 Billion Users Must Do Now
Interesting times for Gmail. Google has confirmed that its two new headline upgrades don't actually work together, raising awkward questions for its 3 billion users. All those users must now grapple with a new upgrade decision, one with major implications for the future direction of the platform and for what Google does next. We're talking AI, security and privacy.

The two upgrades that clash are a kind of quasi end-to-end encryption and an AI-fueled search alternative that promises a markedly different quality of results. Google can't see your fully encrypted emails (rightly so), and so must exclude those emails from its AI search results. This will not be a one-off situation. Cloud-based AI features and device-based security do not work well together, leaving users with some tough decisions to make. Apple had seemed to resolve this with its Private Cloud Compute, but then its disastrous Apple Intelligence delays made all that somewhat academic, for the time being at least.

Newsweek has now neatly framed the decision facing Gmail users: 'Allow Gmail to use AI-driven tools by enabling "smart features" and data sharing, [or] ...' But this isn't a simple yes or no. Gmail and other email platforms are driving towards an AI future, and it won't be that easy to disable all the AI settings. Remember, Google has access to all your content anyway on its cloud servers. The only real opt-out (bar changing a bunch of admin settings) is to fully secure your emails. And on that, the caveats and qualifiers around Gmail's end-to-end encryption highlight how difficult a medium email is to fully secure. Even with Advanced Data Protection enabled, iCloud Mail is one of the only exceptions to what can be end-to-end encrypted on iPhone.
As Apple explains, 'iCloud Mail does not use end-to-end encryption because of the need to interoperate with the global email system,' albeit 'all native Apple email clients support optional S/MIME for message encryption.'

Beyond email, this new AI decision is not specific to Gmail. You will see variations from multiple other platforms in the coming months. WhatsApp's new Advanced Chat Privacy stops users exporting entire chats or saving media to their phone gallery (easily bypassed, of course), but it also stops engagement with Meta AI within a chat. Ask yourself why. Again, new AI updates and security and privacy don't work well together.

Meanwhile, OpenAI's ChatGPT 'will now remember your old conversations,' which sounds great until you consider the privacy implications of all that personal data being stored in a readily accessible way. In all likelihood nothing has changed from a data standpoint, except the optics. But even as Sam Altman posted his excitement at 'AI systems that get to know you over your life, and become extremely useful and personalized,' the move unsurprisingly 'sparked privacy concerns.' Users can choose to opt out of ChatGPT's update, but just as with Gmail, there is no sensible level of user education and understanding. What are the risks and trade-offs? What happens to your data, and how is it secured and safeguarded? Is there even a common syntax to explain this choice across platforms?

Specifically on Gmail: yes, you must decide whether you want AI marauding across your data, but you also need to be clear on the pros and cons of email itself. It's not a fully secure platform and is open to malware, phishing and spam. All of that will also get worse as the trickle of AI attacks becomes a tidal wave. In reality, there's a much larger rethink required. My recommendation is to keep all that in mind as you decide.