Latest news with #JeramieScott


Daily Mail
20-05-2025
- Daily Mail
Urgent warning to Americans over 'dangerous' technology quietly rolled out in 80 airports
You look up. A camera captures your face. Within seconds, you've been scanned, stored, and tracked, before you even reach airport security. Before you ever hand over your ID, the Transportation Security Administration (TSA) already knows exactly who you are. This is happening at 84 airports across the US. And chances are, you didn't even notice.
Marketed as a tool to enhance security, TSA's facial recognition system is drawing criticism for its potential to track Americans from the terminal entrance to their final destination. While the agency insists the scans are voluntary, many passengers say they're unaware that opting out is even an option. Behind these sleek, touchless scanners lies a vast biometric database, which is raising concerns among experts about how the data might be used, or misused, by the very government that collects it.
Jeramie Scott, senior counsel at the Electronic Privacy Information Center, described facial recognition as 'an invasive and dangerous surveillance technology.' 'That will ultimately accelerate the use of our faces as our ID, and that has some very important implications for privacy, civil liberties, civil rights and our democracy,' he added.
TSA's facial recognition program, known as Credential Authentication Technology with Camera (CAT-2), scans a traveler's face in real time and matches it to the photo on their government-issued ID. Once verified, passengers can proceed without ever showing their physical license. The technology is already in use at major US airports, including Los Angeles International, Atlanta Hartsfield-Jackson, and Chicago O'Hare. But this is only the beginning. TSA plans to expand the program to more than 400 airports in the coming years.
Gerardo Spero, TSA's Federal Security Director for Pennsylvania and Delaware, said: 'Identity verification of every traveler prior to flying is a key step in the security screening process. This technology enhances our ability to detect fraudulent IDs such as driver's licenses and passports at checkpoints, and it increases efficiency by automatically verifying a passenger's identity. We just want to ensure that you are who you say you are.'
However, the initiative has drawn significant criticism from privacy experts, civil rights groups, and lawmakers across the political spectrum. One concern is the lack of comprehensive federal regulations governing the use of facial recognition technology. Speaking to HuffPost, Scott warned that 'what may be the safeguards today does not mean they will be the safeguards tomorrow.'
According to the TSA, passengers who are uncomfortable with the facial recognition system can choose to opt out and request a manual ID check. Saira Hussain, a senior staff attorney at the Electronic Frontier Foundation, regularly declines the facial scan when she flies. But she says TSA agents often respond with statements like, 'We already have your information, so it's not like you're giving us anything more.'
TSA's long-term vision, outlined in its technology roadmap, is to create a fully 'touchless' and 'frictionless' airport experience, transforming your body into your boarding pass. But what the agency markets as convenience, others view as surveillance.
Travis LeBlanc, a lawyer and former member of the Privacy and Civil Liberties Oversight Board, warns that TSA's placement within the Department of Homeland Security raises serious concerns about the use of traveler data. 'TSA is part of the Department of Homeland Security, which is also responsible for immigration enforcement,' LeBlanc said. 'There are many potential uses for these images.'
In January, the Trump administration removed LeBlanc and two other Democratic board members. He is currently suing the US government to be reinstated, arguing that the dismissal was unlawful.
Despite the criticism, TSA remains committed to expanding the program. In a recent statement, the agency said biometric screening will 'improve security effectiveness' and 'enhance the passenger experience.'
For now, opting out remains an option. But as summer travel ramps up and facial recognition systems roll out to more airports, passengers are left to weigh the trade-off between speed and privacy.


Forbes
22-03-2025
- Business
- Forbes
390 Million Faces: Clearview AI's Secret $750,000 Attempt To Buy Your Mugshot
Facial recognition firm Clearview AI took steps to dramatically expand its surveillance capabilities by attempting to purchase hundreds of millions of arrest records containing sensitive personal information, including social security numbers and mugshots, according to documents reviewed by 404 Media. The controversial company, already notorious for amassing over 50 billion facial images scraped from social media platforms, signed a contract in mid-2019 with Investigative Consultant, Inc. to acquire roughly 690 million arrest records and 390 million arrest photos from across all 50 U.S. states.
"The contract shows that Clearview was trying to get social security numbers, email addresses, home addresses, and other personal information along with the mugshots," said Jeramie Scott, Senior Counsel at the Electronic Privacy Information Center, or EPIC.
The ambitious data grab ultimately fell apart, spiraling into legal battles between the two firms. Clearview shelled out $750,000 for an initial data delivery but declared it "unusable," triggering mutual breach of contract claims. Despite an arbitrator ruling in Clearview's favor in December 2023, the company hasn't recouped its investment and now seeks a court order to enforce the arbitration award.
Privacy watchdogs warn about the troubling implications of merging facial recognition technology with criminal justice data. Scott pointed out that linking individuals to mugshots and related information can fuel bias among human reviewers using the system. "This is especially concerning given that Black and brown people are overrepresented in the criminal legal system," Scott emphasized. Facial recognition systems have repeatedly come under fire for their well-documented failures when identifying people with darker skin tones.
The consequences have been severe. Multiple cases across America have seen innocent individuals wrongfully arrested based on faulty identifications from facial recognition technology.
As a digital forensics expert, I have seen facial recognition technology fail firsthand, with real consequences. I was retained on a criminal defense case in which authorities accused the defendant of using a rental truck to commit a felony. Their entire case hinged on a single facial recognition match from surveillance footage. In my investigation, I uncovered irrefutable evidence of innocence: cell phone data placed the defendant miles from the crime scene during the critical timeframe. The technology that triggered his arrest had completely misidentified him.
This wasn't merely a technical glitch but a life-altering ordeal for someone who faced serious criminal charges based on algorithms that proved unreliable. What's particularly troubling is how quickly investigators accepted the facial recognition result without pursuing basic corroborating evidence that would have immediately cleared the defendant. Cases like this reveal the dangerous over-reliance on surveillance technologies within our criminal justice system. When companies like Clearview pursue even larger databases of personal information, they risk amplifying these failures at a scale that could affect innocent people.
Clearview AI faces an intensifying barrage of legal obstacles worldwide. The firm recently celebrated a victory against a £7.5 million fine from the UK's Information Commissioner's Office, or ICO, successfully arguing that it fell outside UK jurisdiction. Yet this represents merely one skirmish in a broader regulatory battlefield.
International regulators have slapped Clearview with multimillion-dollar penalties for privacy violations, while the company just received final approval for a settlement that forced it to surrender nearly a quarter of its ownership over alleged violations of biometric privacy laws.
Clearview AI's business model revolves around selling access to its facial recognition technology, primarily targeting law enforcement agencies. The company boasts that its technology has helped crack cases ranging from homicides to sophisticated financial fraud. While competitors like NEC and Idemia have built their market presence through conventional business development channels, Clearview stands apart and draws particular scrutiny because of its aggressive approach of scraping billions of images from social media platforms without obtaining consent.
The revelation about Clearview's attempted acquisition of sensitive personal data arrives as the facial recognition industry faces mounting pressure for regulation and transparency. As this powerful technology increasingly permeates law enforcement and private security operations, fundamental questions about privacy, consent and algorithmic bias continue to dominate public discourse.
Note: The case examples described are based on real events, but names, dates, locations, and specific details have been altered to protect client confidentiality while preserving the essential legal principles and investigative techniques.
404 Media reports that 'ICI and Clearview did not respond to multiple requests for comment.' I have also requested comment and will update this article if and when I receive a response.


The Intercept
18-03-2025
- Politics
- The Intercept
DEA Insiders Warned About Legality of Phone Tracking Program. Their Concerns Were Kept Secret.
When the Drug Enforcement Administration's access to a secret trove of billions of American phone records was exposed in 2013, the Obama administration said the data had been collected under a perfectly legal program. Civil liberties advocates, however, were not convinced that the data collection program, which let the DEA see who you called and who they called too, was aboveboard.
Now, more than a decade later, the advocates are learning that they had a clutch of surprising allies: DEA officials on the inside, whose internal alarms were kept secret. Watchdog findings released last week show that government officials had privately raised questions about the program for years, including a high-ranking DEA agent who expressed 'major' concerns. The FBI even halted its own agents' access to the database for months. The DEA's 'Hemisphere' project went ahead despite the apprehensions, and it continues to this day.
With new details about the program coming to light, civil liberties advocates in Washington, including those in Congress, are again raising their concerns. One watchdog group said the latest revelations show that the program was flawed from the beginning. 'There should have been no question from the very start that this program needed a proper legal analysis, to determine whether there was the authority for the government to obtain this type of information in bulk through administrative subpoenas,' said Jeramie Scott, a senior counsel at the Electronic Privacy Information Center. 'It's a real failure of oversight and accountability that years went by without a proper legal analysis.'
When the DEA's program was made public, it immediately drew comparisons to the National Security Agency's domestic phone call database revealed by Edward Snowden. The key details of the DEA program were shocking to civil liberties advocates: AT&T had made billions of phone call records available to the agency and other law enforcement agencies in exchange for payment. Those records did not include the content of calls, but they did include metadata such as the time of each call and the number called, according to the Electronic Frontier Foundation. The data extended far beyond AT&T's own customers, since most calls pass through AT&T's switches at some point.
The 'Hemisphere' project could provide call data not just about who a target was communicating with, but also so-called 'two-hop' data on who that second person was in phone contact with as well. Authorities could obtain the call records simply by sending a request to AT&T, no court order required, and the company asked the government to keep the program secret. The DEA even sought to cover up the program's existence by sending traditional subpoenas later on in cases headed for court, a process known as 'parallel construction.' The program is also administered by regional anti-drug offices using money provided by the White House Office of National Drug Control Policy, a convoluted structure that Sen. Ron Wyden, D-Ore., said in 2023 has allowed it to skip a mandatory federal privacy review.
When the program was revealed by the New York Times in 2013, the Justice Department downplayed civil liberties concerns. It argued that the program was no different from the long-standing practice of subpoenaing individual phone providers. Critics, though, said the program had vast differences.
'Hemisphere' produced information in hours instead of months; it included 'two-hop' data about the people who had interacted with a target phone number's calling partners; and it could provide analysis in response to a request for 'advanced' information. The 'advanced' products from AT&T appear to have involved the ability to uncover location data on cellphones and to identify possible replacement phone numbers for so-called drop or burner phones, according to the Electronic Privacy Information Center's Scott. Scott, whose nonprofit sued the government for records on the program, said the search for drop phones likely involved analysis on AT&T's part, taking it, for legal purposes, a far step beyond the typical 'business records' that can be obtained with administrative subpoenas.
The 'advanced' searches in particular appear to have raised internal concerns, according to portions of a 2019 report from the Justice Department's Office of Inspector General that were made public just last week. The Justice Department released the new version of that report six years after its original publication, after prodding from Wyden and Rep. Andy Biggs, R-Ariz. The new version shows that legal questions were raised about the 'Hemisphere' program at least four times.
In 2007, the same year the program started, a DEA supervisor asked the agency's Office of Chief Counsel for 'assurance' that it had legal approval. The legal office started reviewing the program, sending back to agents a request for more information on the 'geographic' data it produced. The legal analysis, however, petered out without reaching a conclusion, according to the newly revealed portions of the inspector general report. There was 'no evidence,' the report said, that the DEA's lawyers 'substantively addressed the issue raised in the memorandum at a later date.'
In February 2008, a DEA special agent in charge expressed 'major concerns' about the way the program was being used in an email to senior DEA officials. That email did produce a formal memorandum, approved by agency lawyers, laying out a data request protocol, but the memorandum was never distributed to DEA employees in the field.
In August 2010, the FBI's top lawyer contacted the DEA with concerns about the 'Hemisphere' program. What the FBI discovered apparently alarmed it enough to completely suspend use of the program later that month. Discussions over the legality of the program's 'advanced' product continued for months, drawing in other agencies that employed the phone database, including the Justice Department's Bureau of Alcohol, Tobacco, Firearms, and Explosives as well as the Department of Homeland Security. The FBI eventually reinstated its agents' access but limited the kind of information they could request. The exact nature of that self-imposed limit remains redacted in the latest version of the inspector general report.
Scott said it was notable that the FBI curbed its agents' access to certain analyses while other agencies such as the DEA plowed ahead. 'The DEA had less qualms about using advanced products that the FBI seemed to think were legally questionable,' he said.
From September 2012 to January 2013, one of the DEA's in-house lawyers drafted an analysis of the 'Hemisphere' program that concluded it was on solid legal footing. Yet this analysis was never finalized or distributed, the inspector general report says.
While the revelation of the DEA program in September 2013 caused widespread alarm among civil liberties advocates, it never spurred meaningful restrictions. Instead, as Wyden detailed in a November 2023 letter to then-Attorney General Merrick Garland, the program continued in fits and starts 'under a new generic sounding program name, 'Data Analytical Services.''
By releasing the unredacted portions of the report, the Trump administration appears to have taken a step forward on transparency, but it is unclear whether it will follow through with reforms. (The White House did not respond to a request for comment.)
In Congress, Wyden, Biggs, and other members have for years pushed a government surveillance reform act that would tackle a wide range of concerns. Among other policy changes, it would require regular inspector general reports on 'Hemisphere.'