
Latest news with #SafetyCore

Google Will Blur Photos With 'Containing Nudity' Label In Messages App: Here's How

News18

04-05-2025


The Google Messages app is getting a new safety tool designed to protect young users and warn people about explicit content. Google has begun rolling out a feature in its Messages app that automatically blurs photos flagged as containing nudity. Using on-device AI, the feature identifies sensitive content before a user views, sends, or forwards such media, and displays clear warnings. First introduced late last year, the initiative is part of Google's broader effort to promote safer online communication.

Supported by Android's SafetyCore, the Messages app analyses all content locally on the device, ensuring that no image data or identifying information is sent to Google's servers, as reported by 9To5Google. This approach aims to protect user privacy while helping people avoid potentially harmful situations. When the feature is turned on, photos suspected of containing nudity are automatically blurred. A warning message appears with choices such as "Learn why nude images can be harmful," "Block this number," and a simple prompt that reads "No, don't view" or "Yes, view." After viewing the image, users can re-blur it by selecting the "Remove preview" option.

According to the report, users under the age of 18 are automatically subject to these warnings. For supervised users, who are usually children with parental controls, the feature is controlled entirely through the Family Link app and cannot be disabled. For unsupervised teenagers between the ages of 13 and 17, the feature is enabled by default but can be deliberately turned off through Google Messages settings. For adults, the feature is optional and stays inactive unless manually activated.

The safety system also steps in before users try to share or forward potentially offensive images. If such content is identified, Google Messages prompts the sender with a confirmation step: "Yes, send" or "No, don't send." The purpose is to urge users to reconsider impulsive decisions by introducing a deliberate delay, or "speed bump," rather than blocking actions outright.

The feature's availability is still restricted, even though it was formally announced in October last year and began rolling out in phases in February this year. Early testing indicates that the setting, found under Messages > Protection & Safety > Manage sensitive content warnings, has appeared on only a small number of beta devices so far, suggesting that a wider rollout is still in progress.
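The age- and supervision-based defaults described above can be sketched as a small decision function. This is an illustrative model only, not Google's actual API: the names `AccountType`, `warnings_enabled_by_default`, and `user_can_disable` are hypothetical.

```python
from enum import Enum

class AccountType(Enum):
    SUPERVISED_CHILD = "supervised"   # parental controls via Family Link
    UNSUPERVISED_TEEN = "teen"        # ages 13-17, no parental controls
    ADULT = "adult"

def warnings_enabled_by_default(account: AccountType) -> bool:
    """Per the article: warnings are on by default for all users under 18."""
    return account != AccountType.ADULT

def user_can_disable(account: AccountType) -> bool:
    """Supervised children cannot turn the feature off themselves (only
    Family Link controls it); teens can disable it in Messages settings,
    and adults opt in or out freely."""
    return account != AccountType.SUPERVISED_CHILD
```

The point of the sketch is that the two questions are independent: whether the feature starts on, and who is allowed to turn it off.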

Samsung's Android Update—What Galaxy Owners Must Do Now

Forbes

25-04-2025


Samsung will be delighted to put its One UI 7 rollout in the rearview mirror, and now seems to be accelerating its upgrade process after months of delays and frustrations. In addition to a raft of feature updates, the new OS brings major security and privacy improvements from both Google and Samsung, and a new, Apple-like ecosystem. After waiting so long, Galaxy owners will be keen to jump straight in.

But there's a timely note of caution from SammyFans before you do: the "essential practice of downloading app updates that you should perform after completing the installation." This won't come as a surprise, but it's easy to overlook as your phone reboots. Most of these updates will be about alignment with the new OS to ensure a seamless, bug-free experience. But there will also be security updates and patches that you need: "A new OS update incorporates security patches and better data protection, which also improves user data stored within the app."

While updating third-party apps is fairly obvious, you also need to update the stock Google apps on your phone and, just as critically, the background Play and other services that run your phone. Open both Google's and Samsung's stores and check for updates; you'll find services updates there as well. Per SammyFans, the step-by-step instructions for updating your apps cover both Google Play and the Galaxy Store.

There is a note of caution here. One recent background update that's causing controversy is Google's SafetyCore app. This was installed across Android's ecosystem some months ago without any notification or warning. It provides a content-scanning capability that runs on-device and doesn't share data with Google or anyone else. Its first live application is scanning images for nudity within Google Messages, whether those are being sent or received. It's on by default for minors and off for adults. You can find details on disabling this photo scanning and uninstalling SafetyCore here. It's likely to be reinstalled with future Play Services updates, though, so check regularly if you decide you don't want it on your phone.

Just as you finally get Android 15, its successor, Android 16, has reached Beta 4. It is due for a stable release in the summer, likely July, and the question now is how much longer Samsung owners will wait behind Pixels for the upgrade. If you have a Galaxy S25, you should certainly expect a fast update. That has now become critical.

Google Messages now blurs nudity by default with on-device AI

India Today

22-04-2025


Google has begun rolling out a new feature in its Messages app that automatically blurs images flagged as containing nudity. The feature, first announced late last year, uses on-device AI to detect sensitive content and issue clear warnings before a user can view, send, or forward such media.

The sensitive content warning system is part of Google's broader initiative to promote safer digital communication. Backed by Android's SafetyCore, the technology is designed so that all content analysis happens locally on the device, meaning no image data or identifying information is sent to Google's servers. This aims to protect users' privacy while helping them navigate risky situations.

With the feature enabled, images flagged as possibly containing nudity are automatically blurred. A warning message appears with options such as "Learn why nude images can be harmful," "Block this number," and a clear prompt: "No, don't view" or "Yes, view." Users can also re-blur the image after viewing by tapping a "Remove preview" option.

The report states that these warnings are turned on by default for users under 18. For supervised users, typically children with parental controls, the feature cannot be turned off and is fully managed through the Family Link app. For unsupervised teens aged 13 to 17, the feature is enabled by default but can be manually disabled via Google Messages settings. For adults, the feature is opt-in and remains off unless manually enabled.

The safety system also intervenes before users attempt to send or forward potentially explicit images. If such content is detected, Google Messages will prompt the sender with a confirmation step: "Yes, send" or "No, don't send." The idea isn't to block actions entirely but to introduce a thoughtful pause, a "speed bump," to encourage users to reconsider impulsive decisions.

As of now, the feature is limited to image-based content and does not apply to videos. It also works only when the image is shared through Google Messages with sensitive content warnings turned on; other apps must explicitly integrate with SafetyCore for similar protection. While the feature was officially announced in October and began rolling out in phases from February, its availability remains limited. According to early tests, the setting, located under Messages > Protection & Safety > Manage sensitive content warnings, has only appeared on a few beta devices so far, suggesting a broader rollout is still in progress.
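The sender-side "speed bump" described above is a simple pattern: flagged content is not blocked, it just requires an explicit confirmation before going out. A minimal sketch, assuming a hypothetical `confirm` callback standing in for the "Yes, send" / "No, don't send" dialog:

```python
def outgoing_prompt(flagged: bool, confirm) -> bool:
    """Sender-side 'speed bump': unflagged images send immediately;
    flagged ones are sent only if the user explicitly confirms.
    Returns True if the message should be sent."""
    if not flagged:
        return True      # no pause needed
    return confirm()     # deliberate delay; the user decides
```

Nothing is ever silently dropped: the worst case for a flagged image is one extra tap.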

Google Messages rolls out Sensitive Content Warning to blur explicit images

Business Standard

22-04-2025


Google Messages is now reportedly rolling out Sensitive Content Warnings, which will blur explicit images on Android. According to a report by 9To5Google, the feature is enabled by default for users under the age of 18, while for users above 18 it is optional and disabled by default. The report adds that it hasn't been widely rolled out yet and has so far appeared on only two devices running the latest beta version of Messages.

To curb children's exposure to explicit content, Google has divided minors into two categories: supervised users and unsupervised teens (13-17 years of age). For supervised users, the feature cannot be turned off, but parents can control it through the Family Link app; unsupervised teens, however, can disable it in their Google Account settings.

The report explained that the feature works in two ways. First, if an image might contain nudity, it is automatically blurred. You get the choice to delete it before opening, along with these options:

  • Learn why explicit images can be risky.
  • Block the sender.
  • Choose to view or not view the image.

If you decide to view it but change your mind, you can blur it again by tapping "Remove preview" in the corner. The second part of the feature reportedly steps in when you're about to send or forward an image that may contain nudity: it warns you about the risks and asks for confirmation before letting you send it.

How does the classification work? The image detection feature, currently limited to photos and not videos, runs entirely on the device itself. It uses Android's SafetyCore system, which ensures that no personal data or classified images are shared with Google's servers. SafetyCore only activates when an app chooses to use it and explicitly asks for content analysis. For instance, images won't be scanned unless they're being sent via Google Messages with the Sensitive Content Warnings option enabled.
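The gating described above, where classification happens only when an app has opted in and the user setting is on, and the receiving flow that blurs flagged images, can be sketched as follows. This is a hypothetical illustration of the described behaviour, not SafetyCore's real interface; `should_scan`, `handle_incoming`, and the `classify` callback are all invented names.

```python
def should_scan(app_opted_in: bool, warnings_enabled: bool) -> bool:
    """Classification runs only when the calling app has integrated the
    scanner AND the user-facing warning setting is enabled. Everything
    stays on-device; nothing is uploaded either way."""
    return app_opted_in and warnings_enabled

def handle_incoming(image_bytes: bytes, classify, scan: bool) -> dict:
    """Blur flagged images and surface the options the report lists."""
    if scan and classify(image_bytes):  # classify: on-device model stand-in
        return {"blurred": True,
                "options": ["Learn why explicit images can be risky",
                            "Block the sender",
                            "Choose to view or not view the image"]}
    return {"blurred": False, "options": []}
```

The key design point is the double opt-in: a classifier being installed on the device does nothing by itself until both the app and the user setting request it.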

Google Starts Scanning Your Photos—3 Billion Users Must Now Decide

Forbes

22-04-2025


When Google added photo scanning technology to Android phones, it caused a huge backlash, with the company accused of "secretly" installing new monitoring technology on Android phones "without user permission." At the time, Google assured me that SafetyCore was an enabling framework and would not actually start scanning photos or other content. The new app, it said, "provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature."

Well, that time has now come, and it starts with Google Messages. As reported by 9to5Google, "Google Messages is rolling out Sensitive Content Warnings that blur nude images on Android." Not only does it blur content, but it also warns that such imagery can be harmful and provides options to view explicit content or block numbers. The AI scanning takes place on-device, and Google assures that nothing is sent back to its servers.

Android hardener GrapheneOS backed up that claim: SafetyCore "doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users." But GrapheneOS also lamented that "it's unfortunate that it's not open source and released as part of the Android Open Source Project and the models also aren't open let alone open source… We'd have no problem with having local neural network features for users, but they'd have to be open source." Back to that secrecy point, again.

The Google Messages update was expected. The question now is what comes next. The risk is that the capability is being introduced at the same time as secure, encrypted user content is coming under increasing pressure from legislators and security agencies around the world. Each time such technology is introduced, privacy advocates push back.

For now, the feature is disabled by default for adults but enabled by default for children. Adults can enable the new safety measures in Google Messages settings, under Protection & Safety > Manage sensitive content warnings. Depending on a child's age, their settings can only be changed in either their account settings or Family Link.

This doesn't end here. Just as with Gmail and other platforms, Google's three billion Android, email and other users will need to decide what level of AI scanning, monitoring and analysis they're comfortable with, and where they draw the line. This feature is on-device, but many of the new updates don't have that same privacy protection. AI monitoring is here to stay and will take some getting used to. As Phone Arena points out, the new photo scanning "also works in reverse; if you try to send or forward an image that might be considered sensitive, Messages will flash a heads-up to let you know what you're about to share, and you'll have to confirm before it goes through." Welcome to the brave new world of "big brother" AI.
