Samsung plans to turn SmartThings devices into motion sensors – but I'm not convinced
QUICK SUMMARY
Samsung has announced plans to turn its SmartThings appliances into motion sensors.
With Home AI, Samsung will add ambient sensing to its SmartThings devices to learn your habits and automate activities.
During the Samsung Galaxy Unpacked event, Samsung announced plans to turn its SmartThings appliances into motion sensors. With a continued focus on 'AI for All', the SmartThings ecosystem will monitor and gather insights into users' daily habits and automate activities to create a more efficient smart home experience.
Samsung Galaxy Unpacked took place last week, and while the event was heavily focused on the company's new smartphone line-up, Samsung SmartThings users also got some exciting updates. During the event, Samsung announced its Home AI feature, which will add ambient sensing to its SmartThings devices and appliances.
For those new to the concept, ambient sensing is a type of technology that uses sensors to detect and gather information about a specific environment. Using ambient sensing in your home means your devices can monitor light, temperature, movement and other conditions, then adjust accordingly.
With Home AI, Samsung plans for its SmartThings platform to monitor your daily habits and activities to better understand your needs. From there, SmartThings can adapt to your lifestyle and make changes to your surroundings so your smart home is more streamlined and you don't have to rely on voice assistants or apps as much.
The SmartThings devices expected to act as motion sensors are Samsung's TVs, speakers and refrigerators, as these appliances already have a host of smart features. Acting as motion and sound sensors, these SmartThings devices can gather data and adjust to help you. For example, your TV could use ambient sensing to track your movements and tell you to stand up if you've been sitting for too long.
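To make that idea more concrete, here's a minimal sketch of the kind of logic an ambient-sensing reminder might follow. It's purely illustrative: the class, threshold and reminder text below are assumptions of mine, not anything from Samsung's SmartThings platform or its published APIs.

# Hypothetical sketch of an ambient-sensing "stand up" reminder.
# Names and values are illustrative, not from the SmartThings API.
from dataclasses import dataclass, field

SITTING_LIMIT_MINUTES = 60  # assumed threshold before a reminder fires

@dataclass
class AmbientSensingTV:
    """Simulates a TV that doubles as a motion sensor."""
    minutes_without_motion: int = 0
    reminders_sent: list = field(default_factory=list)

    def record_minute(self, motion_detected: bool) -> None:
        """Called once per minute with the latest motion reading."""
        if motion_detected:
            # The viewer moved, so reset the sitting timer.
            self.minutes_without_motion = 0
        else:
            self.minutes_without_motion += 1
            if self.minutes_without_motion == SITTING_LIMIT_MINUTES:
                self.reminders_sent.append("Time to stand up and stretch!")

# Usage: simulate 90 minutes of someone sitting still in front of the TV.
tv = AmbientSensingTV()
for _ in range(90):
    tv.record_minute(motion_detected=False)
print(tv.reminders_sent)  # ['Time to stand up and stretch!']

The real feature would presumably feed habits like this into Home AI's broader model of your routines rather than using a single fixed timer, but the basic trigger-and-respond pattern is the same.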
Alongside Home AI, Samsung has also announced a new version of its Map View, which now uses generative AI to create a personalised 3D map of your home. You can now take photos on your phone so SmartThings can better understand your rooms and make automations more personal and natural.
The concept of SmartThings devices as sensors is definitely interesting, although I'm not completely convinced by it. While it's handy to have devices around your house that know your routines, I wonder how invasive it might get and how often the devices will make adjustments that aren't actually needed.
One thing that will make users happy is the privacy controls, as Home AI data is stored locally on your SmartThings hub and home network. Home AI is expected to roll out in 2025 and potentially into 2026, so Samsung has plenty of time to work on these updates. For more details, see how to start a smart home with Samsung SmartThings.