Latest news with #SafetyCore


Phone Arena
33 minutes ago
New safety feature in Google Messages rolls out which detects and blurs explicit images
The feature works by processing and classifying images entirely on-device through Android System SafetyCore. According to Google, no identifiable data or the classified content itself is sent to the company's servers, and users must be signed in to their Google Account in Messages for the tool to function. When an incoming image is flagged and blurred, you can choose to:
- Learn why nude images can be harmful
- Block the sender's number
- View the image after confirming your choice
- Return to the conversation without opening it

The system also issues a prompt when users attempt to send or forward a nude image. Additionally, users are reminded of the risks and must confirm before the message goes through. For adults (18+), the feature is turned off by default but can be enabled via Google Messages Settings > Protection & Safety > Manage sensitive content warnings > Warnings in Google Messages. The rules differ for younger users: supervised accounts cannot turn it off themselves, as the setting is managed by parents via Family Link, while unsupervised teens aged 13–17 can disable it in their Google Account settings.

Apple introduced a similar system called Communication Safety in iMessage, which blurs sexually explicit images for children's accounts and provides safety resources. Like Google's approach, Apple's detection also happens on-device, aiming to protect privacy while adding an extra layer of safety. However, Apple's version is primarily aimed at minors, whereas Google's covers both adult and teen users, with different default settings based on age.

On one hand, Google's Sensitive Content Warnings could help reduce harmful or unwanted exposure, especially for younger users. Having the detection happen on-device, with no image data sent to servers, should also ease privacy concerns. On the other hand, some users may find the prompts intrusive, particularly in adult conversations where consent is already established. The fact that adults must enable the feature manually might also limit its adoption. That said, this rollout targets a real problem that needs a solution, particularly when it comes to minors. If tweaking your settings or putting up with some prompts is the price to pay, it's up to each individual to decide whether it's worth it.
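To make the receive-side flow described above concrete, here is a minimal Kotlin sketch of an on-device classify-then-blur pipeline. The SensitiveContentClassifier interface, MessageImageHandler class, and the action names are illustrative assumptions for this sketch; they are not the actual SafetyCore or Google Messages APIs.

```kotlin
// Minimal sketch of an on-device classify-then-blur flow.
// SensitiveContentClassifier is a HYPOTHETICAL stand-in for the on-device
// model SafetyCore provides; it is not the real API.
import android.graphics.Bitmap

interface SensitiveContentClassifier {
    // Runs entirely on-device; the image never leaves the phone.
    fun isLikelyNude(image: Bitmap): Boolean
}

// The four choices offered on the warning sheet, per the article.
sealed class IncomingImageAction {
    object LearnWhyHarmful : IncomingImageAction()
    object BlockSender : IncomingImageAction()
    object ViewAfterConfirmation : IncomingImageAction()
    object ReturnToConversation : IncomingImageAction()
}

class MessageImageHandler(private val classifier: SensitiveContentClassifier) {

    /** Returns true if the incoming image should be shown blurred with a warning. */
    fun shouldBlur(image: Bitmap, warningsEnabled: Boolean): Boolean =
        warningsEnabled && classifier.isLikelyNude(image)

    /** Maps the user's choice on the warning sheet to an action. */
    fun handleChoice(choice: IncomingImageAction, showImage: (unblur: Boolean) -> Unit) {
        when (choice) {
            IncomingImageAction.LearnWhyHarmful -> openSafetyResources()
            IncomingImageAction.BlockSender -> blockSenderNumber()
            IncomingImageAction.ViewAfterConfirmation -> showImage(true)
            IncomingImageAction.ReturnToConversation -> showImage(false)
        }
    }

    private fun openSafetyResources() { /* navigate to help content */ }
    private fun blockSenderNumber() { /* add sender to the block list */ }
}
```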


Time of India
11 hours ago
Google Messages adds sensitive content warning feature to protect users from nudity
Google is rolling out a new safety feature for its Messages app on Android. As reported by 9to5Google, the tech giant is introducing a new Sensitive Content Warnings feature for Android users. The feature is designed to detect and blur images containing nudity, protecting users from unsolicited explicit content while preserving privacy through on-device processing. It uses a new Android system service called "SafetyCore" to perform all classification and blurring locally on the user's phone.

How the sensitive content warnings feature works:
- Automatic detection and blurring: images flagged as containing nudity are blurred before being displayed.
- On-device privacy: all image classification happens locally using Android's SafetyCore system; no data is sent to Google servers.
- User controls: when receiving such images, users can tap to learn why nude images may be harmful, block the sender, or choose to view or ignore the image.

In case a user tries to send or forward a flagged image, Google Messages will prompt a warning about potential risks. Users must also confirm their intention before the image is sent, adding a layer of protection against accidental sharing. Parents can also manage the feature for children via the Family Link app, ensuring that young users are shielded from inappropriate content. The defaults vary by account type:

User Type | Default Setting | Can Be Changed? | Managed Through
Adults (18+) | Off | Yes | Messages Settings
Unsupervised Teens (13–17) | On | Yes | Google Account Settings
Supervised Accounts (Children) | On | No | Family Link App

This rollout is part of Google's broader push to create a safer digital messaging environment. By combining AI-powered detection with user autonomy and strict privacy safeguards, Google Messages is setting a new standard for responsible communication tools.
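The defaults table above can be expressed as a small lookup, shown in the Kotlin sketch below. AccountType, WarningDefaults, and defaultsFor are illustrative names for this sketch only, not real Google APIs; the values simply encode what the article reports.

```kotlin
// Sketch encoding the per-account-type defaults described above.
enum class AccountType { ADULT, UNSUPERVISED_TEEN, SUPERVISED_CHILD }

data class WarningDefaults(
    val enabledByDefault: Boolean,
    val userCanChange: Boolean,
    val managedThrough: String,
)

fun defaultsFor(type: AccountType): WarningDefaults = when (type) {
    AccountType.ADULT ->
        WarningDefaults(enabledByDefault = false, userCanChange = true, managedThrough = "Messages settings")
    AccountType.UNSUPERVISED_TEEN ->
        WarningDefaults(enabledByDefault = true, userCanChange = true, managedThrough = "Google Account settings")
    AccountType.SUPERVISED_CHILD ->
        WarningDefaults(enabledByDefault = true, userCanChange = false, managedThrough = "Family Link")
}
```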


News18
04-05-2025
Google Will Blur Photos With 'Containing Nudity' Label In Messages App: Here's How
Google Messages app is getting a new safety tool that looks to protect young users and warn people about explicit content. Google has begun rolling out a new feature in its Messages app that automatically filters photos flagged for containing nudity. Using on-device AI, the feature identifies sensitive content before a user views, sends, or forwards such media, providing clear warnings. First introduced late last year, this initiative is part of Google's broader effort to promote safer online communication.

Supported by Android's SafetyCore, the Messages app analyses all content locally on the device, ensuring that no image data or identifying information is sent to Google servers, as reported by 9to5Google. This approach aims to protect user privacy while helping users avoid potentially harmful situations. When the feature is turned on, photos that are suspected of being nude are automatically blurred. A warning message appears with choices like "Learn why nude images can be harmful", "Block this number", and a simple prompt that reads "No, don't view" or "Yes, view". After viewing the image, users can also choose to re-blur it by selecting the "Remove preview" option.

According to the report, users under the age of 18 are automatically subject to these warnings. For supervised users, usually kids with parental controls, the feature is entirely controlled through the Family Link app and cannot be disabled. The feature is enabled by default for unsupervised teenagers between the ages of 13 and 17; however, it may be deliberately turned off through Google Messages settings. The feature is optional for adults and stays inactive unless manually activated.

Additionally, the safety system steps in before users try to share or forward potentially offensive images. If such content is identified, the sender will be prompted with a confirmation step in Google Messages: "Yes, send" or "No, don't send". The purpose is to urge users to reconsider impulsive decisions by introducing a deliberate delay, or "speed bump", rather than completely blocking actions.

The feature's availability is still restricted, even though it was formally announced in October last year and started to roll out in phases from February this year. Early testing indicates that the setting, found under Messages > Protection & Safety > Manage sensitive content warnings, has so far appeared on only a small number of beta devices, indicating that a wider rollout is still ongoing.
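The send-side "speed bump" described above could look roughly like the Kotlin sketch below: flag a suspect outgoing image and require explicit confirmation before it is sent, without blocking the action outright. OutgoingImageGate, the classifier lambda, and the callback names are assumptions made for illustration, not Google Messages internals.

```kotlin
// Sketch of the send-side "speed bump": confirm before sending a flagged image.
// The classifier and UI callbacks are illustrative assumptions, not real APIs.
import android.graphics.Bitmap

class OutgoingImageGate(private val isLikelyNude: (Bitmap) -> Boolean) {

    /**
     * If the image is flagged, ask the user to confirm ("Yes, send") before
     * sending; otherwise send immediately. Nothing is blocked outright.
     */
    fun send(
        image: Bitmap,
        askUserToConfirm: (onDecision: (confirmed: Boolean) -> Unit) -> Unit,
        sendImage: (Bitmap) -> Unit,
    ) {
        if (isLikelyNude(image)) {
            askUserToConfirm { confirmed ->
                if (confirmed) sendImage(image) // "Yes, send"
                // "No, don't send": return to the conversation without sending
            }
        } else {
            sendImage(image)
        }
    }
}
```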


Forbes
25-04-2025
Samsung's Android Update—What Galaxy Owners Must Do Now
Here's what Galaxy owners must do

Samsung will be delighted to put its One UI 7 rollout in the rearview mirror, and now seems to be accelerating its upgrade process after months of delays and frustrations. In addition to a raft of feature updates, the new OS brings major security and privacy improvements from both Google and Samsung, and a new, Apple-like ecosystem. After waiting so long, Galaxy owners will be keen to jump straight in. But there's a timely note of caution from SammyFans before you do, about the "essential practice of downloading app updates that you should perform after completing the installation."

This won't come as a surprise, but it's easy to overlook as your phone reboots. Most of these updates will be around alignment with the new OS to ensure a seamless, bug-free experience. But there will also be security updates and patches that you need. "A new OS update incorporates security patches and better data protection, which also improves user data stored within the app."

While updating third-party apps is fairly obvious, you also need to update the stock Google apps on your phone and, just as critically, the background Play and other services that run your phone. Open both Google's and Samsung's stores and check for updates; you'll also find services updates there.

There is a note of caution here. One recent background update that's causing controversy is Google's SafetyCore app. This was installed across Android's ecosystem some months ago without any notification or warning. It provides a content-scanning capability that runs on-device and doesn't share data with Google or anyone else. Its first live application is scanning images for nudity within Google Messages, whether those are being sent or received. It's on by default for minors and off for adults. You can find details on disabling this photo scanning and uninstalling SafetyCore here. It's likely to be reinstalled with future Play Services updates, though, so check regularly if you do decide you don't want it on your phone.

Per SammyFans, the step-by-step instructions for updating your apps cover both Google Play and the Galaxy Store. Just as you finally get Android 15, its successor Android 16 has reached Beta 4. This is due for a stable release in the summer, likely July, and the question now is how much longer Samsung owners will wait behind Pixels for the upgrade. If you have a Galaxy S25 you should certainly expect a fast update. That has now become critical.


India Today
22-04-2025
Google Messages now blurs nudity by default with on-device AI
Google has begun rolling out a new feature in its Messages app that automatically blurs images flagged as containing nudity. The feature, first announced late last year, uses on-device AI to detect sensitive content and issue clear warnings before a user can view, send, or forward such media.

The sensitive content warning system is part of Google's broader initiative to promote safer digital communication. Backed by the Android System SafetyCore, the technology is designed so that all content analysis happens locally on the device, meaning no image data or identifying information is sent to Google servers. This aims to protect users' privacy while helping them navigate risky situations.

With the feature enabled, images flagged as possibly containing nudity are automatically blurred. A warning message appears with options such as "Learn why nude images can be harmful," "Block this number," and a clear prompt: "No, don't view" or "Yes, view." Users can also choose to re-blur the image after viewing by tapping a "Remove preview" option.

The report states that these warnings are turned on by default for users under 18. For supervised users, typically children with parental controls, the feature cannot be turned off and is fully managed through the Family Link app. For unsupervised teens aged 13 to 17, the feature is enabled by default but can be manually disabled via Google Messages settings. For adults, the feature is opt-in and remains off unless manually enabled.

The safety system also intervenes before users attempt to send or forward potentially explicit images. If such content is detected, Google Messages will prompt the sender with a confirmation step: "Yes, send" or "No, don't send." The idea isn't to block actions entirely but to introduce a thoughtful pause, a "speed bump," to encourage users to reconsider impulsive choices.

As of now, the feature is limited to image-based content and does not apply to videos. It also works only when the image is shared through Google Messages with sensitive content warnings turned on. Other apps must explicitly integrate with SafetyCore for similar protection.

While the feature was officially announced in October and began rolling out in phases from February, its availability remains limited. According to early tests, the setting, located under Messages > Protection & Safety > Manage sensitive content warnings, has only appeared on a few beta devices so far, suggesting a broader rollout is still in progress.
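For the point above about other apps needing to integrate with SafetyCore, here is a purely illustrative Kotlin sketch of what such an integration could look like: the app hands an image to an on-device classifier and decides locally whether to blur it. SafetyCore's actual integration surface is not documented in these articles, so SafetyCoreClient, classifyImage, and Verdict are hypothetical names, not the real API.

```kotlin
// Illustrative sketch only: a third-party chat app consulting a hypothetical
// on-device classifier before displaying an image. Not the real SafetyCore API.
import android.graphics.Bitmap

interface SafetyCoreClient {
    // Assumed contract: classification runs on-device and returns a verdict.
    suspend fun classifyImage(image: Bitmap): Verdict
    enum class Verdict { LIKELY_SENSITIVE, NOT_SENSITIVE }
}

class ChatImageRenderer(private val safetyCore: SafetyCoreClient) {

    /** Decides locally whether an incoming image should be shown blurred. */
    suspend fun render(
        image: Bitmap,
        warningsEnabled: Boolean,
        show: (image: Bitmap, blurred: Boolean) -> Unit,
    ) {
        val blur = warningsEnabled &&
            safetyCore.classifyImage(image) == SafetyCoreClient.Verdict.LIKELY_SENSITIVE
        show(image, blur) // the image itself never leaves the device
    }
}
```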