
Latest news with #404Media

Illinois Cops Gave ICE Access to More Than 5,000 Surveillance Cameras Nationwide

Yahoo

a day ago

  • Politics
  • Yahoo

Illinois Cops Gave ICE Access to More Than 5,000 Surveillance Cameras Nationwide

Records obtained by 404 Media through a public records request to the Danville, Illinois, police department revealed that local and state police around the country searched automatic license plate reader (ALPR) camera data more than 4,000 times for immigration-related reasons between June 1, 2024, and May 5, 2025. Despite an Illinois law prohibiting the use of ALPR data for immigration enforcement, the records suggest that Immigration and Customs Enforcement (ICE) has informally gained access to camera networks across more than 5,000 communities nationwide.

The technology provider in question, Flock Safety, operates more than 40,000 ALPR cameras across the United States that use artificial intelligence to constantly record the plates, color, and make of passing vehicles. The cameras can also register unique features of vehicles, like missing hubcaps. The data is kept for 30 days and can be searched to find a vehicle, and by extension, a person. Flock markets its services as a solution for combating crimes like carjackings or finding missing persons, and the company claims its technology helps solve an average of more than 2,200 crimes per week.

According to the Flock Safety User Guide, once a law enforcement agency like the Danville Police Department joins the Flock network, its officers can not only search data from their own community's cameras but also access the data of users across the nation "who are also opted into the National Lookup" capability. And it's not just police departments making up the Flock network: major companies, neighborhood homeowner associations, and affordable housing communities also use the technology and can share data. The result is an expansive system capable of tracking drivers' movements and patterns and providing real-time alerts of a license plate's location across the U.S., typically without a warrant or court order. Without proper oversight, this network amounts to mass government surveillance of American citizens.

The warrantless search of ALPR data by local law enforcement is already being challenged as unconstitutional by the Institute for Justice, which argues that it violates the Fourth Amendment's protections against unreasonable searches and seizures and amounts to warrantless government surveillance. And now there may be even more cause for concern about Americans' privacy, as evidence suggests that ICE, which does not have a formal contract with Flock, has gained side-door access to local and state law enforcement's ALPR camera data.

When searching the Flock system, officers must note a "reason" behind each search. The records revealed more than 4,000 searches listing "ICE," "ICE+ERO" (ICE's Enforcement and Removal Operations), "illegal immigration," "ICE WARRANT," or, more broadly, "HSI" (for Homeland Security Investigations) as the reason. Illinois officers who spoke with 404 Media said that some of the searches in question were informal assists or favors for federal agents. One police chief told 404 Media that he did not know what a search performed by a department officer embedded with the Drug Enforcement Administration was for, even though the stated "reason" read "immigration violation."

Danville's mayor, Rickey Williams Jr., denied that the records show Danville police searching data or acting at the behest of another agency, including ICE. "As required by the State of Illinois, we ensure that we will not use LPR data to enforce a law related to a person's immigration status," Williams Jr. told 404 Media. Illinois prohibits police from sharing ALPR data for immigration enforcement, and some other states and law enforcement agencies have codified similar restrictions. Flock's own policies also prohibit its cameras from being used for immigration enforcement.

But as the search records and 404 Media's investigation show, Flock's ALPR data is, in fact, being informally shared with federal agencies for immigration purposes, and it is happening without any public discussion of what data federal agencies should and should not be able to access.

"I assume there's a fair number of community residents who accept giving police the power to deploy license plate readers to catch a bank robber, who would absolutely gag on the idea that their community's cameras have become part of a nationwide ICE surveillance infrastructure," Jay Stanley, senior policy analyst at the American Civil Liberties Union's Speech, Privacy, and Technology Project, told 404 Media. "And yet if this kind of informal backdoor access to surveillance devices is allowed, then there's functionally no limits to what systems ICE can tap into with no public oversight or control into what they are tapping into."
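The analysis described above boils down to filtering a search audit log's free-text "reason" field for immigration-related terms. Below is a minimal sketch of that kind of filter, assuming a hypothetical CSV export with agency and reason columns; that format is a stand-in for illustration, not how Flock actually delivers these records.

```python
import csv
import re
from collections import Counter

# Reason strings 404 Media reported finding in the audit records;
# word boundaries keep "ICE" from matching inside words like "police".
PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bICE\b", r"ICE\+ERO", r"illegal immigration",
    r"ICE WARRANT", r"\bHSI\b", r"immigration",
)]

def immigration_related(reason: str) -> bool:
    """True if a search's stated reason matches any immigration-related pattern."""
    return any(p.search(reason) for p in PATTERNS)

def tally_by_agency(path: str) -> Counter:
    """Count immigration-related searches per agency in a (hypothetical) CSV export."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if immigration_related(row.get("reason", "")):
                counts[row.get("agency", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for agency, n in tally_by_agency("flock_audit_export.csv").most_common(10):
        print(f"{agency}: {n}")
```

Anything such a filter flags would still need a human read of the full record, since the reason field is free text and the same terms can appear in unrelated contexts.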

The CIA Created a Fake STAR WARS Fan Site As Cover for a Spy Network in the 2000s

Yahoo

2 days ago

  • Entertainment
  • Yahoo

The CIA Created a Fake STAR WARS Fan Site As Cover for a Spy Network in the 2000s

Spies and spy networks are no strangers to the Star Wars galaxy. Andor was all about them, and we've had references to 'rebel spies' as far back as the original trilogy. But did you know that Star Wars was used as a cover for actual real-world spies by the CIA? We learned via 404 Media that security researcher Ciro Santilli discovered that, back in the mid-to-late 2000s, the CIA created a fake Star Wars fan site, one the agency secretly used to communicate with informants in other countries. If you visit the site's address now, it takes you directly to the official CIA website, so we'd say his research was accurate.

Of course, the site is long gone, but screenshots of it exist, and they are very much a time capsule of that era of the internet. The agents who designed it sure did their Star Wars homework. One screenshot shows a little boy in Jedi robes, with links to several Star Wars websites; interestingly, many of those sites still exist today. There are also images of animated Clone Wars-era Yoda, which means the CIA was using this fake Star Wars site at least until 2010, not long after the show debuted on Cartoon Network.

Here's where things get dark, however. Aside from this one Star Wars site, the CIA used other pop culture and gaming websites in the same way. Eventually, Iranian authorities discovered these sites, and we now know of their link to the killing of several CIA sources in China circa 2010-2012, which is about the time the fan site went away. Luckily, thanks to sites like the Wayback Machine, we have evidence of its existence.

In Return of the Jedi, Mon Mothma sadly says, 'Many Bothans died to bring us this information,' in a reference to her spy network. In reality, it seems many actual spies died using Star Wars as a cover. It's a truly tragic example of 'art imitates life, life imitates art.'
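The Wayback Machine research mentioned above is easy to reproduce against any domain. Here is a rough sketch using the Internet Archive's public CDX API, with example.com standing in for the fan site's domain, which isn't named here.

```python
import json
import urllib.parse
import urllib.request

def archived_snapshots(domain: str, limit: int = 20) -> list[dict]:
    """List archived captures of a domain via the Internet Archive's CDX API."""
    query = urllib.parse.urlencode({
        "url": domain,
        "output": "json",
        "fl": "timestamp,original,statuscode",
        "limit": limit,
    })
    with urllib.request.urlopen(f"https://web.archive.org/cdx/search/cdx?{query}") as resp:
        rows = json.load(resp)
    if not rows:
        return []
    header, *entries = rows  # the first row lists the field names
    return [dict(zip(header, entry)) for entry in entries]

if __name__ == "__main__":
    # example.com is a placeholder domain for illustration only.
    for snap in archived_snapshots("example.com"):
        print(snap["timestamp"], snap["statuscode"], snap["original"])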

More than 2 years after ChatGPT, newsrooms still struggle with AI's shortcomings

Yahoo

2 days ago

  • Business
  • Yahoo

More than 2 years after ChatGPT, newsrooms still struggle with AI's shortcomings

An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop. The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it. 'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools that aim to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses. Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as the potential for high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staffs, coupled with multiple external partnerships, makes it difficult to pin down where embarrassing AI blunders can occur. The insert incident exemplifies the myriad ways AI errors can be introduced into news products. Most supplements that the Sun-Times has run this year, from puzzles to how-to guides, have come from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. But whether it's an insert or a full-length story, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place. Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines, but has avoided gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies. 'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.

Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' That means AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.' 'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News' AI summaries, for example, were announced in January and have already included several errors. The LA Times' Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled an Apple Intelligence feature that incorrectly summarized push notifications from news outlets. And that's just the recent past: for years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles, and CNET that same year published several inaccurate stories. Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, by pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But the models are not infallible, which is why these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners like King Features uphold those same standards, just as it already ensures freelancers' codes of ethics mirror its own. But the 'real takeaway,' as Kass put it, isn't just that humans are needed; it's 'why we're needed.' 'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'
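The failure at the heart of this story was a missing verification step: nobody checked whether the recommended books existed. As a thought experiment, here is a minimal sketch of the kind of automated first pass a desk could run before human review, using Open Library's public search API; the sample titles below are illustrative and not taken from the insert.

```python
import json
import urllib.parse
import urllib.request

def book_exists(title: str, author: str) -> bool:
    """Return True if Open Library's catalog has a record matching the title/author pair."""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": 1})
    with urllib.request.urlopen(f"https://openlibrary.org/search.json?{query}") as resp:
        return json.load(resp).get("numFound", 0) > 0

if __name__ == "__main__":
    # One real book and one made-up title, to show both outcomes.
    candidates = [
        ("Beloved", "Toni Morrison"),
        ("The Meridian of Small Hours", "Toni Morrison"),  # fabricated for illustration
    ]
    for title, author in candidates:
        verdict = "found" if book_exists(title, author) else "NOT FOUND: flag for human review"
        print(f"{title!r} by {author}: {verdict}")
```

A catalog match would not prove a recommendation is any good, and a miss would not prove it is fake, so the output is only a triage list for a human editor, which is exactly the kind of human backstop the AP describes.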

Some newsrooms still struggle with the gap between capability and accountability where AI is concerned

CNN

2 days ago

  • Business
  • CNN

Some newsrooms still struggle with the gap between capability and accountability where AI is concerned

An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop. The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it. 'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools that aim to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses. Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as the potential for high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staffs, coupled with multiple external partnerships, makes it difficult to pin down where embarrassing AI blunders can occur. The insert incident exemplifies the myriad ways AI errors can be introduced into news products. Most supplements that the Sun-Times has run this year, from puzzles to how-to guides, have come from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. But whether it's an insert or a full-length story, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place. Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines, but has avoided gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies. 'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.

Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' That means AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.' 'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News' AI summaries, for example, were announced in January and have already included several errors. The LA Times' Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled an Apple Intelligence feature that incorrectly summarized push notifications from news outlets. And that's just the recent past: for years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles, and CNET that same year published several inaccurate stories. Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, by pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But the models are not infallible, which is why these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners like King Features uphold those same standards, just as it already ensures freelancers' codes of ethics mirror its own. But the 'real takeaway,' as Kass put it, isn't just that humans are needed; it's 'why we're needed.' 'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'
