Someone Might Be Watching You Through Your Home Security Camera. Here's What To Know.
You've probably heard stories of people and companies listening to your conversations through smart speakers or smartphones. (Yes, this really happens.) Maybe you've even heard nightmare stories of strangers gaining access to a home camera system, a growing concern as in-home cameras from brands like Blink, SimpliSafe, Ring, Nest, and Arlo become more popular.
Folks use these cameras in their homes to check in on kids, seniors, and pets. They also use them to monitor their homes for threats. Unfortunately, hackers can gain access to this footage, too.
When it comes to hackers getting into your home camera systems, there are two possible scenarios: Someone accesses the live feed and watches you in real time, or someone hacks into your recorded videos from the past weeks, months or years, experts told HuffPost.
How does this happen? And what can you do to protect yourself? Here's what security experts say:
If you don't change the default password after buying your camera, it's more vulnerable to hacking.
Most people's home security cameras won't get hacked, but it does happen in certain scenarios.
Typically, when hackers get into in-home cameras, it's because of a bad password, said Dave Chronister, the CEO of Parameter Security.
First, if you use a default password for your home camera, it's more vulnerable to nefarious activity, said Alex Hamerstone, the advisory solutions director for TrustedSec, an ethical hacking company. Your camera generally comes with a default username and password when you buy it, he explained.
'And hackers know, and scammers know, the default username and password so they can log into cameras if you have not changed the default username and password,' Hamerstone said.
'There's a site called Shodan that people oftentimes share links to, and what Shodan does is it goes on and looks for items with default passwords,' he added. 'So, whether it's a webcam or a router or something like that... it posts the footage from the cameras.'
Hackers can also scan for default passwords associated with cameras to get into your system, Chronister said.
'We have found this in a lot of areas,' he said. In particular, hackers will steal illicit videos of people in their homes and go on to share the footage on public websites and adult websites.
If you are the target of a scam, you may also be at risk.
Your camera could also be compromised if someone is targeting you specifically or you fall for a phishing scam, said Hamerstone.
'If I'm a specific target, somebody may try to phish me and find out my passwords whether it's for the camera or something else,' he said.
'And then they may, accurately, guess that someone's using the same password for all the different accounts, including cameras, and then log in that way,' he added. Then, they can gain access to your feed and any saved videos.
In most cases, if you are being actively watched, the camera light should turn on.
According to Chronister, the light that flashes on when you turn on a camera should turn on whenever that camera is in use, even when a hacker is in control. But, there may be camera manufacturers out there that don't have this feature — or, there could be a way to disable the light from turning on, he added.
If you see your camera light turn on when you aren't actively using it, that's clearly a red flag, but it isn't the only sign that someone is in your system.
Bad actors can also get into your camera's files and look back through your recorded videos, which may not cause the light to switch on.
There are a few things you can do to protect yourself:
While you may never be a victim of camera hacking, the risk is still there. 'My big thing is I just want people to be aware of the risk and then make an informed choice,' Hamerstone said.
While having cameras in your house does mean someone could access your videos, there are a few things you can do to bolster your security.
'Number one, the biggest thing is disable any default passwords,' said Chronister. This is true for cameras and anything else you buy, Hamerstone added.
'Hackers are lazy. If there are 10,000 cameras out there and 9,000 have the default password, they're not going to try to crack the other 1,000,' noted Chronister.
Don't use your go-to password for your camera system. Really, you should never reuse old passwords, said Hamerstone, but especially for crucial things like cameras and banking.
'For most of us, if we have cameras we can go on our phone and log into that account and see [the footage or live feed],' Hamerstone said.
You should also turn on multi-factor authentication, such as receiving a text code, when logging into the camera app. This makes your account more secure.
Chronister said it's also important to keep your camera systems up-to-date.
'They are mini-computers. There should be an ability to update them to new firmware when needed,' Chronister said.
Software can contain vulnerabilities, and updates patch those flaws and keep your account secure.
If you have a camera in your home that has been installed but unused for years, take it down, said Chronister.
These cameras are old and outdated, which can open them up to security threats, he noted. Old cameras and old software don't have the same safety protections as newer alternatives.
And make sure you don't have your cameras in the more private spaces in your home.
It may sound counterintuitive, but if you are going to have in-home cameras, keep them in the more public spaces in your home, said Hamerstone.
'So, maybe keep them in the kitchen, but not your bedroom or something like that,' he said.
It may go without saying, but you should also keep cameras out of your bathroom, Chronister said. The risk outweighs the benefit when it comes to having cameras in your bedroom and bathroom, he added.
If someone does access your in-home camera footage, you don't want it to be recordings of your most private moments. Someone watches you walk in your front door? Fine. Someone watches you have sex with your partner? Not fine.
If you do install in-home cameras to watch loved ones or monitor your security, be sure to do it in the spaces that won't lead to problems if a bad actor eventually gets their hands on the footage.

This article originally appeared on HuffPost.