Latest news with #LilyHayNewman


WIRED | Business | 03-05-2025
Hacking Spree Hits UK Retail Giants
Matt Burgess, Lily Hay Newman, and Dhruv Mehrotra | May 3, 2025, 6:30 AM

Plus: France blames Russia for a series of cyberattacks, the US is taking steps to crack down on a gray market allegedly used by scammers, and Microsoft pushes the password one step closer to death.

Researchers unveiled a cluster of vulnerabilities in Apple's wireless media streaming platform AirPlay this week that leave millions of third-party devices like speakers and TVs vulnerable to takeover if an attacker is on the same Wi-Fi network as the victim gadget. These 'AirBorne' vulnerabilities have all been patched—including some that potentially impacted Apple's Mac computers—but, in practice, third-party devices may not all get fixes, and even if they do, patch adoption could be low. Records reviewed by WIRED show that utilizing car subscription features can substantially raise your risk of being subjected to government surveillance, because such services generate troves of data that are valuable to law enforcement. WIRED also did a deep dive on North Korea's yearslong campaign to place IT workers inside companies in North America, the United Kingdom, and Europe. The schemes are more effective than ever as scammers incorporate AI into their workflows. WhatsApp designed a special cloud processing platform called Private Processing to allow new AI tools to work in the secure messenger without compromising its end-to-end encryption. Experts warn, though, that it could create enticing targets for hackers. And we have a guide for navigating the privacy risks of using ChatGPT's new image generator to do seemingly fun and innocuous projects like making an action figure version of yourself. But wait, there's more! Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
Three British Retailers Hacked in Spate of Cyberattacks

Three separate retailers in the UK—including the supermarket Co-op and the department stores Marks & Spencer and Harrods—have all revealed they have recently been subject to cyberattacks, with the intrusions and widespread impact seemingly ongoing. Toward the end of April, Marks & Spencer revealed it had been the victim of a 'cyber incident.' Over the following two weeks, it has been forced to pause online orders within its apps, some food has been missing from its shelves, and it has paused recruitment and other 'normal processes.' Staff at Co-op have been told to keep webcams turned on during remote meetings and check who is attending calls, after the company shut down parts of its IT systems in response to its own hack. Harrods, meanwhile, told customers to 'not do anything differently at this point.' At the time of writing, none of the retailers have detailed the specific nature of the cyberattacks or the full scale of the impacts. It is also unclear if the attacks are linked. Bloomberg has reported that a ransomware cartel dubbed DragonForce claimed it and its partners were behind the attacks. The so-called cartel provides 'infrastructure and tools' to hackers but 'doesn't require affiliates to deploy its ransomware,' according to research from security firm Secureworks. The hacked companies did not respond to Bloomberg about the claims. Bleeping Computer originally reported that the threat actors known as Scattered Spider were allegedly behind the attack on Marks & Spencer. The publication reported that the company's servers were encrypted by ransomware, with the intrusion beginning as early as February. The attribution to Scattered Spider has not been confirmed by Marks & Spencer. Over the past two years, Scattered Spider has emerged as one of the most prolific and dangerous sets of hackers currently operating. The threat actors are not a well-defined group of hackers.
Instead, they're more a loose collective that uses social engineering—such as phishing and voice calls—to gain initial access into company networks. Scattered Spider members are often English-speaking, teenaged, and can be members of the heinous criminal group the Com. The hackers have been active since June 2022 and have targeted more than 100 companies—including the high-profile hacks on Caesars Entertainment and MGM Resorts in 2023.

France (Finally) Names Russian Hackers for the First Time

French authorities have condemned Russia's military intelligence agency, accusing it of orchestrating a series of high-profile cyberattacks—including the hacking of Emmanuel Macron's 2017 presidential campaign, a brazen 2015 assault on the TV channel TV5 Monde, and recent intrusion attempts targeting organizations involved in preparing the 2024 Paris Olympic Games. French authorities have also disclosed the name and location of a GRU unit tied to the notorious hacking group APT28—information that had never before been officially released. Unit 20728 is based in the southern Russian city of Rostov-on-Don and operates out of the "166th Information Research Center." This marks the first time French officials have publicly assigned blame to a foreign intelligence service following an internal attribution process. The timing is significant, coming as Paris positions itself at the forefront of Europe's support for Ukraine.

US Moves to Crack Down on 'Largest Illicit Marketplace'

The Trump administration has taken the first step toward blacklisting a Cambodian financial conglomerate at the center of a global money laundering network. On Thursday, the Treasury Department designated Huione Group as a money-laundering operation, alleging that the company and its affiliates have laundered more than $4 billion for criminals, including North Korean hackers and online scammers.
These scammers—who defraud victims through bogus investments and other schemes—rely on Huione and its affiliates to move funds abroad to evade both law enforcement and anti-money-laundering systems. The proposed action represents the most significant effort yet to crack down on Huione, which is tied to what experts believe to be the 'largest illicit marketplace': Huione Guarantee. According to WIRED's January report, the marketplace has likely facilitated over $24 billion in gray-market transactions. Experts believe the platform operates as a one-stop shop for scammers, offering everything from victim contact lists and deepfake tools to fake investment websites and other illicit services.

New Microsoft Accounts Won't Need Passwords Anymore

Slowly but surely, the password is dying. Over the past two years, passkeys—a stronger method of authentication that doesn't require you to remember or use a password—have become more common. The rollout of the technology has been piecemeal, but big tech companies have worked for years to create the alternative, which is more secure than passwords. This week, Microsoft announced that people setting up new accounts with the company won't have to create passwords at all. New Microsoft accounts will now be 'passwordless by default,' the company wrote in a blog post. Microsoft is also pushing people further away from passwords and will 'detect' the best way for people to log in to their accounts if they have set up alternatives to passwords.


WIRED | 29-04-2025
Millions of Apple AirPlay-Enabled Devices Can Be Hacked via Wi-Fi
Lily Hay Newman and Andy Greenberg | Apr 29, 2025, 8:30 AM

Researchers reveal a collection of bugs known as AirBorne that would allow any hacker on the same Wi-Fi network as a third-party AirPlay-enabled device to surreptitiously run their own code on it.

Apple's AirPlay feature enables iPhones and MacBooks to seamlessly play music or show photos and videos on other Apple devices or third-party speakers and TVs that integrate the protocol. Now newly uncovered security flaws in AirPlay mean that those same wireless connections could allow hackers to move within a network just as easily, spreading malicious code from one infected device to another. Apple products are known for regularly receiving fixes, but given how rarely some smart-home devices are patched, it's likely that these wirelessly enabled footholds for malware, across many of the hundreds of models of AirPlay-enabled devices, will persist for years to come. On Tuesday, researchers from the cybersecurity firm Oligo revealed what they're calling AirBorne, a collection of vulnerabilities affecting AirPlay, Apple's proprietary radio-based protocol for local wireless communication. Bugs in Apple's AirPlay software development kit (SDK) for third-party devices would allow hackers to hijack gadgets like speakers, receivers, set-top boxes, or smart TVs if they're on the same Wi-Fi network as the hacker's machine. Another set of AirBorne vulnerabilities would have allowed hackers to exploit AirPlay-enabled Apple devices too, Apple told Oligo, though these bugs have been patched in updates over the last several months, and Apple tells WIRED that those bugs could have only been exploited when users changed default AirPlay settings. Those Apple devices aside, Oligo's chief technology officer and cofounder, Gal Elbaz, estimates that potentially vulnerable third-party AirPlay-enabled devices number in the tens of millions.
'Because AirPlay is supported in such a wide variety of devices, there are a lot that will take years to patch—or they will never be patched,' Elbaz says. 'And it's all because of vulnerabilities in one piece of software that affects everything.' Despite Oligo working with Apple for months to patch the AirBorne bugs in all affected devices, the Tel-Aviv-based security firm warns that the AirBorne vulnerabilities in many third-party gadgets are likely to remain hackable unless users act to update them. If a hacker can get onto the same Wi-Fi network as those vulnerable devices—whether by hacking into another computer on a home or corporate network or by simply connecting to the same coffeeshop or airport Wi-Fi—they can surreptitiously take over these gadgets. From there, they could use this control to maintain a stealthy point of access, hack other targets on the network, or add the machines to a botnet of infected, coordinated machines under the hacker's control. Oligo also notes that many of the vulnerable devices have microphones and could be turned into listening devices for espionage. The researchers did not go so far as to create proof-of-concept malware for any particular target that would demonstrate that trick. Oligo says it warned Apple about its AirBorne findings in the late fall and winter of last year, and Apple responded in the months since then by pushing out security updates. The researchers collaborated with Apple to test and validate the fixes for Macs and other Apple products. Apple tells WIRED that it has also created patches that are available for impacted third-party devices. The company emphasizes, though, that there are limitations to the attacks that would be possible on AirPlay-enabled devices as a result of the bugs, because an attacker must be on the same Wi-Fi network as a target to exploit them. Apple adds that while there is potentially some user data on devices like TVs and speakers, it is typically very limited. 
Below is a video of the Oligo researchers demonstrating their AirBorne hacking technique to take over an AirPlay-enabled Bose speaker to show their company's logo. (The researchers say they didn't intend to single out Bose, but just happened to have one of the company's speakers on hand for testing.) Bose did not immediately respond to WIRED's request for comment. The AirBorne vulnerabilities Oligo found also affect CarPlay, the radio protocol used to connect to vehicles' dashboard interfaces. Oligo warns that this means hackers could hijack a car's automotive computer, known as its head unit, in any of more than 800 CarPlay-enabled car and truck models. In those car-specific cases, though, the AirBorne vulnerabilities could only be exploited if the hacker is able to pair their own device with the head unit via Bluetooth or a USB connection, which drastically restricts the threat of CarPlay-based vehicle hacking. The AirPlay SDK flaws in home media devices, by contrast, may present a more practical vulnerability for hackers seeking to hide on a network, whether to install ransomware or carry out stealthy espionage, all while hiding on devices that are often forgotten by both consumers and corporate or government network defenders. 'The amount of devices that were vulnerable to these issues, that's what alarms me,' says Oligo researcher Uri Katz. 'When was the last time you updated your speaker?' The researchers originally started thinking about this property of AirPlay, and ultimately discovered the AirBorne vulnerabilities, while working on a different project analyzing vulnerabilities that could allow an attacker to access internal services running on a target's local network from a malicious website. In that earlier research, Oligo's hackers found they could defeat the fundamental protections baked into every web browser that are meant to prevent websites from having this type of invasive access on other people's internal networks. 
While playing around with their discovery, the researchers realized that one of the services they could access by exploiting the bugs without authorization on a target's systems was AirPlay. The crop of AirBorne vulnerabilities revealed today is unconnected to the previous work, but was inspired by AirPlay's properties as a service built to sit open and at the ready for new connections. And the fact that the researchers found flaws in the AirPlay SDK means that vulnerabilities are lurking in hundreds of models of devices—and possibly more, given that some manufacturers incorporate the AirPlay SDK without notifying Apple and becoming 'certified' AirPlay devices. 'When third-party manufacturers integrate Apple technologies like AirPlay via an SDK, obviously Apple no longer has direct control over the hardware or the patching process,' says Patrick Wardle, CEO of the Apple device-focused security firm DoubleYou. 'As a result, when vulnerabilities arise and third-party vendors fail to update their products promptly—or at all—it not only puts users at risk but could also erode trust in the broader Apple ecosystem.'


WIRED | 21-04-2025
How to Protect Yourself From Phone Searches at the US Border
Lily Hay Newman and Matt Burgess | Apr 21, 2025, 6:30 AM

Customs and Border Protection has broad authority to search travelers' devices when they cross into the United States. Here's what you can do to protect your digital life while at the US border.

Entering the United States has become more precarious since the start of the second Trump administration in January. There has been an apparent surge in both foreign visitors and US visa holders being detained, questioned, and even deported at the border. As the situation evolves, demand for flights from Canada and Europe has plummeted as people reevaluate their travel plans. Many people, though, can't avoid border crossings, whether they are returning home after traveling for work or visiting friends and family abroad. Regardless of the reason for travel, US Customs and Border Protection (CBP) officials have the authority to search people's phones and other devices as they determine who is allowed to enter the country. Multiple travelers have reported being questioned or turned away at the US border in recent weeks in relation to content on their phones. While not unique to the US border—other nations also have powers to inspect phones—the increasingly volatile nature of the Trump administration's border policies is causing people to rethink the risks of carrying devices packed with personal information to and from the US. Canadian authorities have updated travel guidance to warn of phone searches and seizures, some corporate executives are reconsidering the devices they carry, some officials in Europe continue to receive burner phones for certain trips to the US, and the Committee to Protect Journalists has warned foreign reporters about device searches at the US border. With this in mind, here's the WIRED guide to planning for bringing a smartphone across the border.
You should also use WIRED's guide to entering the US with your digital privacy intact to get a broader view of how to minimize data and take precautions. But start here for everything smartphone.

What Can CBP Access?

Do CBP officials have the authority to search your phone at the border? The short answer is yes. Searches are either manual, with a border official looking through the device, or more advanced, involving forensic tools to extract data en masse. To get into your phone, border officials can ask for your PIN or biometric to unlock the phone. However, your legal status and right to enter the US will make a difference in what a search might look like at the border. Generally, border zones—which include US international airports—fall outside of Fourth Amendment protections that require a warrant for a device to be searched (though one federal court has ruled otherwise). As such, CBP has the power to search any traveler's phone or other electronic devices, such as computers and cameras, when they're entering the country. US citizens and green card holders can refuse a device search without being denied entry, but they may face additional questioning or temporary device seizure. And as the Trump administration pushes the norms of acceptable government conduct, it is possible that, in practice, green card holders could face new repercussions for declining a device search. US visa holders and foreign visitors can face detention and deportation for refusing a device search. 'Not everybody has the same risk profile,' says Molly Rose Freeman Cyr, a member of Amnesty International's Security Lab. 'A person's legal status, the social media accounts that they use, the messaging apps that they use, and the contents of their chats' should all factor into their risk calculus and the decisions they make about border crossings, Cyr says.
If you feel safe refusing a search, make sure to disable biometrics used to unlock your device, like face or fingerprint scanners, which CBP officers can use to access your device. Instead, use only a PIN or an alphanumeric code (if available on your device). Make sure to keep your phone's operating system up to date, which can make it harder to crack with forensic tools. You should also consider factors like nationality, citizenship, profession, and geopolitical views in assessing whether you or someone you're traveling with could be at higher risk of scrutiny during border crossings. In short, you need to make some decisions before you travel about whether you would be prepared to refuse a device search and whether you want to make changes to your devices before your trips. Keep in mind that there are simple steps anyone can take to keep your devices out of sight and, hopefully, out of mind during border crossings. It's always a good idea to obtain a printed boarding pass or prepare other paper documents for review and then turn your phone off and store it in your bag before you approach a CBP agent.

Traveling With an Alternate Phone

There are two ways to approach device privacy for border crossings. One is to start with a clean slate, purchasing a phone for the purpose of traveling or wiping and repurposing your old phone—if it still receives software updates. The device doesn't need to be a true 'burner' phone, in the sense that you will be carrying it with you as if nothing is out of the ordinary, so you don't need to purchase it with cash or take other steps to ensure that it can't be connected to you. The idea, though, is to build a sanitized version of your digital life on the travel phone, ideally with separate communication and social media accounts created specifically for travel.
This way, if your device is searched, it won't have the back catalog of data—old text messages, years of photos, forgotten apps, and access to many or all of your digital accounts—that exists on your primary phone and could reveal details of your political views, your associations, or your movements over time. Starting with a clean slate makes it easy to practice 'data minimization,' or reducing the data available to another person: Simply put the things you'll need for a trip on the phone without anything you won't need. You might make a travel email address, some alternate social media accounts, and a separate account for end-to-end encrypted communications using an app like Signal or WhatsApp. Ideally you would totally silo your real digital life from this travel life. But you can also include some of your regular personal apps, building back from the ground up while determining on a selective basis whether you have existing accounts that you feel comfortable potentially exposing. Perhaps, for example, you think that showing a connection to your employer or a community organization could be advantageous in a fraught situation. Privacy and digital rights advocates largely prefer the approach of building a travel device from scratch, but they caution that a phone that is too squeaky clean, too much like a burner phone, can arouse suspicion. 'You have to 'seed' the device. Use the phone for a day or even for a few hours. It just can't be clean clean. That's weird,' says Matt Mitchell, founder of CryptoHarlem, a security and privacy training and advocacy nonprofit. 'My recommendation is to make a finsta for travel, because if they ask you what your profile is, how are you gonna say 'I don't use any social media'? Many people have a few accounts anyway. One ratchet, one wholesome—add one travel.' Cyr, from Amnesty International, also points out that a true burner phone would be a 'dumb' phone, which wouldn't be able to run apps for encrypted communications. 
'The advantage that we all have with smartphones is that you can communicate in an encrypted way,' Cyr says. 'People should be conscious that any nonencrypted communication is less secure than a phone call or a message on an application like Signal.' While a travel device doesn't need to use a prepaid SIM card bought with cash, it should not share your normal phone number, since this number is likely linked to most if not all of your key digital accounts. Buy a SIM card for your trip or only use the device on Wi-Fi.

Traveling With Your Primary Phone

The other approach you can take to protecting your device during border crossings is to modify your primary smartphone before travel. This involves removing old photos and messages and storing them somewhere else, cleaning out nonessential apps, and either removing some apps altogether or logging out of them with your main accounts and logging back in with travel accounts. Mohammed Al-Maskati, digital security helpline director at the rights group Access Now, says that people should consider this type of clean-out before they travel. 'I will look at my device and see what apps I need,' he says. 'If I don't need the app, I just remove it.' Al-Maskati adds that he suggests people particularly remember to remove dating apps and anything related to LGBTQI communities, especially if they consider themselves to be at higher risk of facing a device search. And generally, this approach is only safe if you are particularly diligent about removing every app that might expose you to risk. You could use your own phone as a travel phone by backing it up, wiping it, building a travel device with only the apps you really need while traveling, going on your trip, and then restoring from the backup when you get home. This approach is doable but time consuming, and it creates more opportunities for operational security mistakes or what are known as 'opsec fails.'
If you try to delete all of your old, unwanted apps, but miss one, you could end up exposing an old social media account or other historic service that has forgotten data in it. Messaging apps can have easily searchable archives going back years and can automatically save photos and files without you realizing it. And if you back up all of your data to the cloud and take it off your device, but are still logged into the cloud account underpinning other services (like your main Google or Apple account), you could be asked to produce the data from the cloud for inspection. Still, if you assess that you are at low risk of facing scrutiny during a border crossing or you don't have access to an additional device for travel, modifying your main smartphone is a good option. Just be careful.

What To Do, If Nothing Else

Given all of this, you may be hyped up and ready to throw your phone in the ocean. Or you may be thinking there's no way in hell that you're ever going to take the time to deal with any of this. For those in the latter camp, you've come this far, so don't click away just yet. If you don't want to take the time to make a bunch of changes, and you don't think you're at particular risk during border crossings (though keep in mind that it's possible your risk is higher than you realize), there are still a few easy things you can do to protect your digital privacy that are better than nothing. First, as mentioned above, print a paper boarding pass and any other documents you might need. Even if you don't turn your phone off and stow it in a bag for your entire entry or exit process, you can put it in your pocket and have your paper ticket and other documents ready while actually interacting with agents. And taking basic digital hygiene steps, like updating your phone and removing apps and data you no longer need, can go a long way.
'We all need to be recognizing that authorities may scrutinize your online presence, including social media activity and posts you've published,' says Danacea Vo, founder of Cyberlixir, a cybersecurity provider for nonprofits and vulnerable communities. 'Since people have gotten more vocal on social media, they're very worried about this. Some have even decided not to risk traveling to or from the US this year.'

Yahoo | Politics | 16-04-2025
How Governments Spy On Protestors—And How To Avoid It
Law enforcement's ability to track and profile political protestors has become increasingly multifaceted and technology driven. In this edition of Incognito Mode, WIRED Senior Editor, Security & Investigations Andrew Couts and WIRED Senior Writer Lily Hay Newman discuss the technologies used by law enforcement that put citizens' privacy at risk, and how to avoid them.

- Protests, almost by definition, are points of contention between citizens and their governments. [subdued music] Police tracking of protestors is multifaceted and includes a variety of tactics and gear that generate different data. Some surveillance is done at the protests, while other methods are used outside of it. - It's just like all different ways to get at this core thing of who was there, what are they up to, what do they think about things? I think that's sort of how I break it down because so many of these technologies are unseen or not intuitive. - In this episode, we'll discuss the technologies used by law enforcement that put citizens' privacy at risk. This is "Incognito Mode." [moody music] - The movies were way ahead on this, right? Like they were depicting, it's like the yellow box that goes around the face type of thing. Now, that is very real. This technology is more and more available to law enforcement.
- Although law enforcement have had access to facial recognition tools for about 20 years, they previously were only able to search government images such as mugshots. This changed in 2018 when many police departments started using Clearview AI, a facial recognition app that allows them to match photos from around the web. Once a photo is uploaded, the app pulls up matches found online along with links to the source of those photos. - [Newsreader] Clearview says more than 600 law enforcement agencies across the country use this software. - Based on the person's facial geometry, the images are converted by the system into a formula measuring things like eye distance. This means that law enforcement can use any image to search for a person who doesn't currently have a police record and isn't known to authorities, and potentially identify them in seconds. - I wanted to ask you, since you've covered this a lot, how do you view the risk of these platforms as they proliferate? - To be quite frank, it freaks me the hell out. Image recognition is just really, really good now and cheaper to deploy, and so, you know, I think it's more just kind of accepting that this is just part of life. Like just commuting every day, you're probably being subjected to some of these systems in one form or another. It's not just the systems where you have face rec built in. It can be deployed after the fact: if you're in people's pictures that are posted on social media, it can get uploaded to these systems and then you can get picked out of a crowd in that way. - [Rioters] USA! USA! - We saw that with, you know, the January 6th insurrection videos that were posted to Parler and other social media platforms. - [Newsreader] News tonight, an Auburn man has been found guilty of federal charges for his actions during the January 6th insurrection. - You know, the FBI took those, they saw people in the videos, they went back and kind of looked to see like, "Okay, here's proof you were there."
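The matching step described above, converting a face into a numeric "formula" and comparing it against a database, can be sketched as a nearest-neighbor search over feature vectors. Everything in this sketch is illustrative: real systems like Clearview derive high-dimensional embeddings from neural networks, and the names, vectors, and threshold below are made up for demonstration.

```python
import math

def cosine_similarity(a, b):
    # Compare two face "templates" (numeric feature vectors): 1.0 means
    # identical direction, values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.9):
    # Return the name of the closest enrolled template scoring above the
    # threshold, or None if no template is similar enough.
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "embeddings". Real systems use hundreds of dimensions
# produced by a trained model, not hand-made values like these.
gallery = {
    "alice": [0.9, 0.1, 0.3, 0.7],
    "bob":   [0.2, 0.8, 0.5, 0.1],
}
probe = [0.88, 0.12, 0.31, 0.69]  # a new photo, close to alice's template
print(best_match(probe, gallery))  # prints "alice"
```

This is why any photo works as a search key: the system only needs to compute one vector for the probe image and scan the gallery, which is fast even at the scale of billions of scraped images.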
Governments in 78 countries use public facial recognition systems with varying degrees of support from their citizens. Many countries use the technology without transparent regulations. In Russia, facial recognition tools have been used not only to detain people protesting the war in Ukraine, but also to identify and arrest opponents of the government before they joined any demonstrations. Reuters reported that the facial recognition systems used in Moscow are powered by Western companies including NVIDIA and Intel. Other companies such as Amazon have also launched software that allows users to build a facial recognition database using their own photos. These systems, they're everywhere and things that you might think could kind of thwart these systems, even like wearing a mask and these kinds of things, some of the technologies can get around that. I don't know what to do with that information to be honest. - There are a lot of police here. Are you not frightened? - We are, but you know, we are together. That gives a real power. - I am frightened. Of course I'm frightened. That's why I'm just covering up all my face just so that they cannot even, you know, find my ID, but me being afraid doesn't mean that I'm not going to be here today and fight for my future. - I agree 100% with what you were saying about how masks and other deterrent measures aren't always effective at defeating these identification technologies. But clearly they are at least somewhat effective sometimes because you know, in a lot of crackdowns we've seen in the last few years by multiple governments, like one thing they'll do is try to ban mask wearing in certain settings. Yeah, are there any other things, please tell me that you have more. - Yeah, I mean I think there are ways to minimize the data and thus minimize the risks. 
Simple things, like not shooting pictures and videos while you're at a protest so you're not capturing yourself or anybody else around you, are one way to keep data out of some types of systems. Avoiding some systems is better than avoiding no systems. You are going to be subjected to this technology in one way or the other, and you just kind of have to proceed as best you can and minimize your contributions to those systems as much as possible. - CCTVs or security cameras have been ubiquitous for a few decades now. One could have thought 20 or 30 years ago, like, "Well, now everything is going to be captured on film all the time." But there are limitations still to just how much data is stored, and for how long. You know, there've been a lot of high-profile events around the world in recent years where there wasn't adequate security footage to really know what had happened. It's not like every step you take, someone is paying to run the system and store the data to identify you. [subdued music] - In 2010, "Wired" reported on federal agents friending crime suspects on sites like MySpace in order to see their photos, communications, and personal relationships. More recently, police have used companies like Dataminr to more easily sift through massive amounts of data in order to glean information about how protests are organized, to identify activists, and to piece together people's connections to each other. - So social media accounts, right? It's a lot of data on everyone who's using these platforms. But I kind of think of these surveillance technologies in two buckets. One would be if authorities want to find out more about a specific person, right? What has Andrew been posting about or saying, and are there photos, you know, of Andrew online? Things like that.
But then the other one would be coming at it from the flip side, where it's like they're looking for anyone who has been talking about X thing, or, you know, anyone marking their location in a certain place on a certain day. Authorities can go directly to the sites, or they might wanna use a service that kind of pulls a ton of data from social platforms together, you know, aggregates all of it, getting kind of lists of names. It gives the ability to, like, have this vibe check. Those platforms themselves aren't inherently a surveillance tool, right? Sometimes we use them for journalism. - I've used some of these services like Dataminr before, and once you see just the fire hose of information that you can get access to when you use it, it becomes clear just how easy it is to kind of figure out what is going on, even if it's not obvious to you in your own, like, curated timeline. The use of them has become more widespread. You wouldn't know, without doing some investigating, whether your local police department is using this or not. That creates an environment where you have to assume that that's what's happening. - Steps like making your account private or setting something to expire quickly, maybe they can help. But I wouldn't assume those types of settings can really truly protect data on big mainstream platforms. - An example of how social media surveillance was used can be found in the MPD's surveillance of the George Floyd protests in 2020. It was found that the MPD collected data about protest events, including dates, locations, organizers, and estimated crowd sizes. The MPD shared this information with the Secret Service, National Park Service, and the Department of Defense. - So I think the other huge advice is about data minimization and not posting about things that you worry about getting into other people's hands. There's a tension here with chilling speech, right? The nature of the internet is to share information, right?
That's like the whole purpose of the platform. When you put stuff out there, it's hard to say, like, "Okay, it's out there, but only for certain people," and control it. - Our perspective on it is probably a little bit different because we're journalists, we're kind of in the public eye in a way that some other people aren't, but I think anybody, no matter if you have one follower or a million, should be really careful about what you post online and when you post it online. You know, if you're gonna post vacation pictures, I never post them while I'm actually on vacation, because then that signals to somebody, like, "Hey, my house is empty." You can apply that to all different types of risks, and I think generally posting less is the way to go. - But also some people really wanna post, or that's, like, their job, or, you know, that's how they make money. It's just helpful to understand that the greater the volume you're posting, the more likely there are things you didn't think of that expose information you didn't realize is now out there. [subdued music] - IMSI catchers, also known as cell site simulators and often referred to by the brand name StingRay, are devices that impersonate cell towers, causing cell phones within a certain radius to connect to them. Initially designed for military and national security purposes, this technology has emerged in routine police use. Until recently, the use of IMSI catchers was withheld from the public. The FBI has even forced state and local police agencies to sign NDAs in order to use their devices. I mean, I find IMSI catchers fascinating just in that their use is really secretive. Like, there was a long time that police weren't allowed to say that they had them or that they were using them, so there's just- - And no one had seen one. - Right. Yeah, exactly. Can you tell us just a little bit about how that works?
- These are devices that, at their core, just identify that your phone was physically in a certain location; like, that's the baseline thing they're trying to achieve. They're sometimes called IMSI catchers because of the IMSI number they're trying to pick up. They can work in different ways. They can work passively, to just sort of sweep around and say what devices are in the area and let me try to, you know, decrypt their signal and catch, you know, an ID number. More often, they work actively, as like a fake cell tower, taking advantage of the way the system works: that your phone is going to connect to the cell tower that's emitting the strongest signal in the area, to give you the best service, and then grab that ID number. Sometimes they can also potentially grab other stuff, like unencrypted communications, like SMS text messages. It's important to know that one of the things that can happen when you bring a phone to an event like a protest is that the fact that you were there, and potentially some other information, could be sort of pulled out of the air by one of these devices. - Records show that IMSI catchers are used by 23 states and the District of Columbia, the DEA, ICE, FBI, NSA, and DHS, along with many additional agencies. In terms of how people gauge the risk of these, I mean, for one thing, like you said, a lot of times they're looking to target one person or maybe a couple of people, and it does end up looping in a lot of people just by the nature of how it works. But it's also one that I think is expensive and complicated to deploy, and so it's probably not gonna be the top concern. If I were going to a protest, I don't think it's the thing I would be so concerned about, just as an average person. - Another thing in that vein: you know, if this technology that we're talking about is rogue cell towers, it means that actual cell towers also have all this information, right? Like, your wireless provider knows where you go.
So that data exists anyway, and there are potentially other ways that, you know, authorities can get that information. [brooding music] - Geofence warrants, or reverse location warrants, allow law enforcement to request location data from apps or tech companies like Google or Apple for all devices in a specific area during a set time. Authorities can then track locations, identify users, and collect additional data like social media accounts. - This is yet another layer among multiple approaches to getting the same information: who was at a certain place at a certain time, and what can we find out about what they were up to? - A lot of it's advertising data, or what's being shared all the time from your device, that you probably aren't paying much attention to and is used in a much more innocuous way typically. - And it's sort of slurping up all the data from this area, which is constrained in a way but doesn't account for passersby, people, you know, getting coffee at the deli next door, people just sort of coming up to a location to see what's going on. Like, this is just bulk, indiscriminate data. I am worried about it, but maybe not specifically. It's in the category, to me, of all the reasons that I might consider leaving a device at home or putting it in a Faraday bag. It's sort of just on that list of reasons that you might wanna minimize the data that your device is emitting. [subdued music] - Data brokers collect and sell personal data from public sources, websites, and apps people use every day. They aggregate all this info to build detailed profiles of people and to group them into simplified categories such as high income, new moms, pet owners, impulse buyers, and more. While advertisers are usually their primary clients, police can also purchase this data. Some of the largest data broker companies include Experian, Acxiom, and Equifax.
The amount of data Equifax collected came to light in 2017, when a data breach exposed 147 million people's personal data. - I think it just fuels this ability to identify someone and track kind of their behavior across the web, and potentially their speech. Similar to the way law enforcement can track and surveil people through social media platforms, information from data brokers can aid investigations in two ways. They can be coming at it from a person of interest who they're trying to find out more about, or authorities can be coming at it from, "I want information on anyone who has had an IP address in this area, or anyone who has keyword searched, you know, and been shown these types of ads." - So how do data brokers collect information? The most common ways include web browsing history: everything from your Google searches, sites or apps you visit, cookies, social media activity, or even a quiz you just filled out for fun. All of that can be scraped and tracked. This data creates each person's online history map, which in turn allows brokers to build a profile on each user. The data that companies collect often includes: name, address, phone number and email address, date of birth, gender, marital and family status, Social Security number, education, profession, income level, and cars and real estate you own. It also comes from public sources. This can be anything in the public domain, such as: birth certificates, driver's or marriage licenses, court or bankruptcy records, DMV records, and voter registration information. It can also include commercial sources, such as: your purchase history, loyalty cards, coupon use, and so forth. And finally, some websites or programs will ask for your consent to share your data.
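The aggregation described above amounts to a record-linkage step: records from different sources are grouped whenever they share an identifier like an email address or phone number, then merged into one profile. This is a toy sketch with invented data; real brokers match on far larger identifier sets and use probabilistic matching rather than exact equality.

```python
def link_records(records, id_fields=("email", "phone")):
    """Group records that share any identifier value, merging each group
    into a single profile. A toy version of what brokers do at scale."""
    profiles = []
    for record in records:
        ids = {record[f] for f in id_fields if f in record}
        for profile in profiles:
            if ids & profile["_ids"]:  # shared email or phone: treat as same person
                profile["_ids"] |= ids
                for key, value in record.items():
                    profile.setdefault(key, value)
                break
        else:
            profiles.append({"_ids": set(ids), **record})
    return profiles

# Three records from different "sources" (all values invented for illustration).
sources = [
    {"email": "jdoe@example.com", "purchases": ["running shoes"], "zip": "19103"},
    {"email": "jdoe@example.com", "phone": "555-0142", "voter_registered": True},
    {"phone": "555-0142", "vehicle": "2014 sedan", "income_bracket": "middle"},
]
```

`link_records(sources)` collapses all three records into one profile, which is how a lookup on a single email address can return purchase history, voter status, and a vehicle in one result.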
Sometimes it's anonymized in certain ways, especially when it comes to advertising data, but it's pretty trivial for law enforcement or other investigators to tie certain advertising behavior to a specific device, especially if it's collecting precise location data. There are also data brokers that are building network profiles, so you can get information not just about yourself, but about everybody you've interacted with, whether it's on social media or actually in real life. In the United States at least, we just lack laws that regulate what these companies are able to collect. And if you have to participate in modern society, as nearly everyone does, it's almost impossible to avoid. I think in the context of protests, it's not an acute concern, I would say, but it is generally speaking really freaky when the sky's the limit on what they could potentially use, because there's just so much data. - I agree with what you said: sort of low on the acute scale, but high on the existential scale. [subdued music] - One of the big surveillance technologies that probably everyone who's driven on a highway knows about is license plate readers. Really just capturing what your license plate is and showing that your vehicle was at a certain place at a certain time. - Similar to, like, your phone, your car is a proxy for you. Maybe you were in the car, maybe you weren't, but that's where your car went. - There are three types of ALPR systems. Stationary or fixed ALPR cameras are installed in a fixed location like a traffic light, telephone pole, or a freeway exit ramp. The second type are mobile ALPR cameras, which are attached to police patrol cars, garbage trucks, and other vehicles, and allow them to capture data from license plates as they drive around the city. They can also assist law enforcement in gridding, which is when police officers drive up and down a neighborhood collecting license plates of all parked cars.
There are also private vendors, like Vigilant Solutions, which collect license plate data and sell it back to police. The third type are ALPR trailers, which police can tow to a particular area and leave for extended periods of time. It's been reported that the DEA has disguised ALPR trailers as speed enforcement vehicles and placed them along the US-Mexico border. The things I'm concerned about aren't necessarily even it being used for license plates. Our colleague Dhruv Mehrotra has done some reporting showing that license plate readers can also capture any words that are visible, so that can be what's on your t-shirt, that could be political signs in your yard. This technology may be used in ways that we're not even familiar with or wouldn't imagine. You know, a lot of times when we're talking about any surveillance technologies, it's really about creating data that then is there and could potentially be used in any number of ways at any point in the future, depending on who gets access to it and what they want to do with it. [moody music] - The key thing here is that these drones, even small quadcopters, like what we think of as consumer drones, can carry a fair amount of cargo, meaning, like, cameras. - There are a number of different drones used by law enforcement, varying in size and ability. For example, some drones have thermal imaging capabilities for night operations, while others specialize in long periods of surveillance. Protestors have in the past reported drones flying overhead, for example in Minneapolis during the George Floyd protests. Police and government drones usually fly in the range of 1,200 feet above the ground. However, it's been reported that the drone used to surveil protests in Minneapolis in 2020 flew at 20,000 feet, nearly invisible to protestors on the ground. This was a Customs and Border Protection drone; these are often equipped with advanced cameras, radar, and potential cell phone geolocation tools.
In terms of how freaked out are you about drones, how do you think about that? - Yeah, I would say fairly freaked out. But again, like you were saying about the layering of these technologies, I think it's not the drones themselves, it's everything they can do and how cheap they are and how easy it would be to deploy even more of this tech. When we talk about sort of evolution of different technologies, this capability is sort of similar to police helicopters and now it's just cheaper, lighter, easier. Even these sort of benign-seeming quadcopters that we see around all the time could be carrying equipment on them to do like very granular, detailed surveillance of something like a protest. [subdued music] - There are some technologies that are really just emerging and we don't even know if they've been used at protests or even used by authorities in the United States. - Right, and your face isn't the only thing sort of outside your body that can potentially identify you. For example, analyzing your gait, like how you walk. - Gait recognition technology can identify individuals by analyzing their unique walking patterns using machine learning. It captures movements through cameras, motion sensors, or even radar. It then processes this information, breaking it down into contours, silhouettes, and other distinguishing features. It offers high accuracy, but its effectiveness can be influenced by things like injuries or the types of terrain the subject is traversing. This tech is especially useful for authorities when people's faces are obscured. While there haven't been any reports of widespread use of this tech by law enforcement agencies in the US, Chinese authorities have been utilizing it on the streets of Shanghai and Beijing since at least 2018. 
In recent years, there have also been a number of companies working on creating emotional detection technology, where AI uses biometric data to determine a person's emotional state and the likelihood they will become violent or cause a disturbance. "Wired" reporting found that Amazon-powered cameras have been scanning passengers' faces in eight train stations in the UK to trial this new technology. The trials were testing the system for age and gender recognition, as well as the emotional state of the person on camera. While there's no current documentation of this tech being used at protests, the BBC reported that emotional-detection tech has been used on Uyghurs in China. - Some of these could be really invasive, because, you know, reading your emotions, there start to be maybe inferences that someone could make about how you were feeling in a certain moment that may or may not be accurate, right? Because it's sort of being taken out of context. So it's difficult to have an algorithm just sort of come to one conclusion. Like, sometimes I think you're doing your angry walk coming over when I haven't filed my story, but really then you're really nice about it and you're like, "It's okay, Lily, you can do it." And, you know, I took it totally the wrong way. But potentially there's more, sort of, in terms of just identifying someone in a certain place. It is scary that there's something characteristic about your walk. They're not saying, "Oh, it's Andrew's angry walk," but they're saying, "Oh, that's Andrew." - Certainly creating more systems that replicate what other things like facial recognition do, and applying it to other biometrics of a person, is definitely gonna create all the same concerns as we've seen with these other technologies that were emerging, you know, years or decades ago.
But now it's your entire body, how you walk, and, like you mentioned, if we're having computers analyze how I'm feeling in a certain moment, effectively establishing intent of whatever my actions are in that moment, that gets really scary, because it might be completely inaccurate. Every time there's one of these new AI technologies, there's always some bias built in. There are gonna be people who suffer consequences unnecessarily because these systems are deployed without being fully debugged. Experts in the AI field have previously noted that emotional-detection tech is unreliable and immature, and some even call for the technology to be banned altogether. [subdued music] Here are a few simple and effective ways to protect yourself and your personal information at a protest. First, if you can, leave your phone at home. I know this might sound drastic, but the most effective way to ensure that your personal data isn't compromised and that your phone won't fall into the hands of law enforcement is by not having it with you. If that's not an option, you can put your phone in a Faraday bag so data can't be accessed. You should also turn off biometrics on your phone, like facial recognition or the fingerprint scanner, meaning you'll need a code to access it. That way your face or fingerprints can't be forcefully used to access your personal information. You can always say you just don't remember the code; don't unlock it. Another thing to keep in mind is posting on social media. Jay Stanley, a senior policy analyst at the ACLU, says, "If you post something online, you should do so under the assumption that it might be viewed by law enforcement." You should always check your sharing settings and make sure you know what posts are public.
Try to minimize the amount of other people's faces you capture in your photos or videos, use end-to-end encrypted messaging services like Signal when possible, wear a mask in case photos or videos are taken, and finally, know your personal risks. Is your immigration status exposing you to additional dangers? Are you part of a minority group that is more likely to be targeted by law enforcement? Keep these things in mind for yourself and your loved ones when deciding if you should go out to a protest. For more information about surveillance at protests, check out WIRED's coverage. This was "Incognito Mode." Until next time. [otherworldly music]


1 Million Third-Party Android Devices Have a Secret Backdoor for Scammers
Lily Hay Newman Matt Burgess Mar 5, 2025 6:00 AM New research shows at least a million inexpensive Android devices—from TV streaming boxes to car infotainment systems—are compromised to allow bad actors to commit ad fraud and other cybercrime. PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES Cheap TV streaming boxes seem like one of the most straightforward gadgets out there, but they can come with hidden costs. In 2023, researchers revealed that tens of thousands of Android TV boxes being used in homes, schools, and businesses were equipped with secret backdoors that allowed them to be used in a host of cybercrime and online fraud. Now, the same researchers have found that the China-based ecosystem behind the compromised devices and the illicit activities they're used for—collectively dubbed Badbox 2.0—is fueling a next-generation campaign that's broader in scope and even more sneaky. At least 1 million Android-based TV streaming boxes, tablets, projectors, and after-sale car infotainment systems are infected with malware that conscripts them into a scammer-controlled botnet, according to new research shared exclusively with WIRED by the cybersecurity firm Human Security. The compromised devices are used for a range of advertising fraud and in so-called residential proxy services, which allow their operators to use victim internet connections for routing and masking web traffic. And all of this activity happens behind the scenes without the owners of compromised devices having any idea of how their streaming boxes are being used. 'This is all completely unbeknownst to the poor users that have bought this device just to watch Netflix or whatever,' Gavin Reid, Human's chief information security officer, tells WIRED. 'Ad fraud including click fraud is all happening behind the scenes, but the main way they are monetizing the million devices is reselling this proxy service. 
Victims don't know that they're a proxy, they never agreed to be a proxy service, but they're being used for that. Any bad thing you want to do, scraping, whatever it is, these proxy services are an enabler for that.' The researchers found that the majority of infected devices are in South America, particularly Brazil. The impacted devices often use generic names and aren't produced by known brands. For example, there are dozens of impacted streaming boxes, but the majority of Badbox 2.0 targets are in the 'TV98' and 'X96' device families. Virtually all of the targeted devices are designed using Android's open source operating system code, meaning they run versions of Android but aren't part of Google's ecosystem of protected devices. Google collaborated with the researchers to address the ad fraud component of the activity, though. The company says it worked to terminate publisher accounts associated with the scams and block the ability of those accounts to generate revenue through Google's advertising ecosystem. 'Malicious attacks like the one described in this report are expressly prohibited on our platforms,' Google spokesperson Nate Funkhouser told WIRED in a statement. 'Bad actors' tactics are constantly evolving. Partnering with organizations like HUMAN helps us share threat intelligence and expands our collective ability to identify and take swift action against bad actors, as we did here.' In the original Badbox campaign, scammers focused on installing backdoored firmware in streaming boxes before they arrived in the hands of consumers. The Badbox 2.0 campaign is significant, the researchers say, because it reflects a major change in tactics. Rather than focusing on low-level firmware infections, Badbox 2.0 involves more traditional software-level malware distributed through common tactics like drive-by downloads, in which victims accidentally download malware without realizing it. 
Researchers from multiple firms say that the campaign seems to come from a loosely connected ecosystem of fraud groups rather than one single actor. Each group has its own versions of the Badbox 2.0 backdoor and malware modules and distributes the software in a variety of ways. In some cases, malicious apps come preinstalled on compromised devices, but in many examples that the researchers tracked, attackers are tricking users into unknowingly installing compromised apps. The researchers highlight a technique in which the scammers create a benign app—say, a game—post it in Google's Play Store to show that it's been vetted, but then trick users into downloading nearly identical versions of the app that are not hosted in official app stores and are malicious. Such 'evil twin' apps showed up at least 24 times, the researchers say, allowing the attackers to run ad fraud in the Google Play versions of their apps, and distribute malware in their imposter apps. Human also found that the scammers distributed over 200 compromised, re-bundled versions of popular, mainstream apps as yet another way of spreading their backdoors. 'We saw four different types of fraud modules—two ad fraud ones, one fake click one, and then the residential proxy network one—but it's extensible,' says Lindsay Kaye, Human's vice president of threat intelligence. 'So you can imagine how, if time had gone on and they were able to develop more modules, maybe forge more relationships, there is the opportunity to have additional ones.' Researchers from the security firm Trend Micro collaborated with Human on the Badbox 2.0 investigation, particularly focusing on the actors behind the activity. 'The scale of the operation is huge,' says Fyodor Yarochkin, a Trend Micro senior threat researcher. He added that while there are 'easily up to a million devices online' for any of the groups, 'This is only a number of devices that are currently connected to their platform. 
If you count all the devices that would probably have their payload, it probably would be exceeding a few millions.' Yarochkin adds that many of the groups involved in the campaigns seem to have some connection to Chinese gray market advertising and marketing firms. More than a decade ago, Yarochkin explains, there were multiple legal cases in China in which companies had installed 'silent' plugins on devices and used them for a diverse array of seemingly fraudulent activity. 'The companies that basically survived that age of 2015 were the companies who adapted,' Yarochkin says. He notes that his investigations have now identified multiple 'business entities' in China which appear to be linked back to some of the groups involved in Badbox 2. The connections include both economic and technical links. 'We identified their addresses, we've seen some pictures of their offices, they have accounts of some employees on LinkedIn,' he says. Human, Trend Micro, and Google also collaborated with the internet security group Shadow Server to neuter as much Badbox 2.0 infrastructure as possible by sinkholing the botnet so it essentially sends its traffic and requests for instructions into a void. But the researchers caution that after scammers pivoted following revelations about the original Badbox scheme, it's unlikely that exposing Badbox 2.0 will permanently end the activity. 'As a consumer, you should keep in mind that if the device is too cheap to be true, you should be prepared that there might be some additional surprises hidden in the device,' Trend Micro's Yarochkin says. 'There is no free cheese unless the cheese is in a mousetrap.'
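The sinkholing the researchers describe can be illustrated with a small sketch: once a command-and-control domain is seized, DNS for it is re-pointed at a server the defenders run, so infected devices' check-ins are logged but answered with nothing. The domain names, addresses, and module names below are invented placeholders, not real Badbox infrastructure.

```python
SINKHOLE_IP = "203.0.113.9"  # documentation-range address standing in for the sinkhole

# After a takedown, the seized C2 domain resolves to the sinkhole instead of
# the scammers' server. (Both domains here are invented examples.)
dns_table = {
    "update.badbox-c2.example": SINKHOLE_IP,
    "cdn.unrelated-service.example": "198.51.100.4",
}

sinkhole_log = []  # defenders record which devices are still phoning home

def fetch_instructions(domain):
    """Simulate an infected device's check-in. A sinkholed domain yields no
    commands; a live C2 would return fraud modules to run."""
    ip = dns_table[domain]
    if ip == SINKHOLE_IP:
        sinkhole_log.append(domain)  # visibility for researchers
        return None                  # the bot's traffic goes "into a void"
    return {"modules": ["ad_fraud", "residential_proxy"]}  # hypothetical payload
```

This is also why the researchers caution that sinkholing isn't permanent: the operators can ship new firmware or apps that point at fresh domains outside the seized list.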