logo
Feds respond to Virginia lawmakers' privacy concerns about airports' facial recognition screening

Yahoo | 10-06-2025
A Credential Authentication Technology unit. (Photo courtesy of the Transportation Security Administration)
Identification screening technology that launched during the pandemic to reduce airport check-in times has helped improve security and travelers' experiences, but carries a privacy concern stemming from the capture and storage of passengers' photos.
Virginia lawmakers, privacy advocates and others have questioned how images of citizens are used and stored, and how people's rights are being protected.
The Transportation Security Administration said in a Thursday announcement that the images captured by the airport screening technology known as Credential Authentication Technology (CAT-2) 'are never stored' or 'used for any other purpose than immediate identification.'
House Communication, Technology and Innovation Committee Chair Cliff Hayes, D-Chesapeake, said tools like CAT-2 are helpful, but safeguards are important to protect civil liberties and privacy as the technology advances.
'Travelers deserve both speed and security when it comes to travel, but it's not at the expense of their privacy,' Hayes told the Mercury.
CAT-2 was first tested at Ronald Reagan Washington National Airport in August 2020. Since then, TSA said, it has been using the cameras carefully, in line with standards and research from the National Institute of Standards and Technology.
The administration said CAT-2 allows TSA officers to scan a traveler's photo identification card and capture a real-time photo of them, too. Once the images of the traveler are confirmed, that person is able to proceed for further screening. CAT-2 also includes a plexiglass shield to minimize contact between TSA officers and travelers, and uses a secure Internet connection allowing the units to verify that an individual is ticketed for air travel, eliminating the need to show a boarding pass.
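For readers curious how the verification sequence described above fits together, the following is a minimal, hypothetical sketch of the flow in Python: scan the ID photo, capture a live image, compare the two, then confirm ticketed status. Every name, threshold, and function here is invented for illustration and does not represent TSA's actual CAT-2 software or any real API.

```python
from dataclasses import dataclass

# Illustrative sketch of the CAT-2 flow described above.
# All names and logic are hypothetical; this is not TSA code or a real API.

@dataclass
class Traveler:
    id_photo: bytes     # image scanned from the photo ID
    live_photo: bytes   # real-time image captured at the checkpoint
    id_number: str

def face_similarity(a: bytes, b: bytes) -> float:
    """Placeholder for a face-matching model returning a 0..1 similarity score."""
    return 1.0 if a and b else 0.0  # dummy stand-in, not a real matcher

def is_ticketed(id_number: str) -> bool:
    """Placeholder for the secure lookup confirming the traveler holds a ticket."""
    return True  # dummy stand-in for a ticketing-system query

def screen(t: Traveler, threshold: float = 0.9) -> str:
    # 1. Match the live capture against the scanned ID photo.
    if face_similarity(t.id_photo, t.live_photo) < threshold:
        return "manual identity verification by an officer"
    # 2. Confirm ticketed status, which removes the need to show a boarding pass.
    if not is_ticketed(t.id_number):
        return "refer to airline"
    # 3. Per TSA's stated policy, images are discarded after identification.
    t.id_photo = b""
    t.live_photo = b""
    return "proceed to further screening"

print(screen(Traveler(id_photo=b"...", live_photo=b"...", id_number="VA-12345")))
```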
Passengers who do not wish to participate in the facial matching process can opt out in favor of an alternative identity verification process, TSA said. The CAT-2 screens are being updated with language that notifies travelers they may decline having their photo taken.
Hayes said that after the technology was piloted in Virginia, he visited multiple airports, including Dulles and Norfolk International, and declined to use the technology.
To TSA's credit, Hayes said, he had no problem being screened. Officers identified him manually using his identification card and boarding pass.
Hayes said he hopes the procedures and policy governing the photos will be codified to ensure travelers' privacy remains a priority and that their photos won't be stored or used beyond screenings.
'What's to prevent that from changing and then all of a sudden, a different administration or depending who's overseeing it, they could change that policy,' Hayes said. 'It needs to be codified, I believe, and absent the lack of any federal law, I think the states need to begin to address that, just as we did with the Consumer Data Protection Act.'
The act gives Virginia residents certain rights over how their personal data is collected and used by businesses.

Related Articles

Can Writing Math Proofs Teach AI to Reason Like Humans?

Scientific American | an hour ago

A few months before the 2025 International Mathematical Olympiad (IMO) in July, a three-person team at OpenAI made a long bet that they could use the competition's brutally tough problems to train an artificial intelligence model to think on its own for hours so that it was capable of writing math proofs. Their goal wasn't simply to create an AI that could do complex math but one that could evaluate ambiguity and nuance—skills AIs will need if they are to someday take on many challenging real-world tasks. In fact, these are precisely the skills required to create artificial general intelligence, or AGI: human-level understanding and reasoning.

The IMO, held this year on Australia's Sunshine Coast, is the world's premier math competition for high schoolers, bringing together top contenders from more than 100 countries. All are given the same six problems—three per day, each worth seven points—to solve over two days. But these problems are nothing like what you probably remember from high school. Rather than a brief numeric answer, each demands sustained reasoning and creativity in the form of a pages-long written proof. These logical, step-by-step arguments have to span many fields of mathematics—exactly the sort of problems that, until just this year, AI systems failed at spectacularly.

The OpenAI team of researchers and engineers—Alex Wei, Sheryl Hsu and Noam Brown—used a general-purpose reasoning model: an AI designed to 'think' through challenging problems by breaking them into steps, checking its own work and adapting its approach as it goes. Though AI systems couldn't officially compete as participants, the notoriously tough test served as a demonstration of what they can do, and the AIs tackled this year's questions in the same test format and with the same constraints as human participants.

Upon receiving the questions, the team's experimental system worked for two 4.5-hour sessions (just as the student contestants did), without tools or the Internet—it had absolutely no external assistance from tools such as search engines or software designed for math. The proofs it produced were graded by three former IMO medalists and posted online. The AI completed five of the six problems correctly, receiving 35 out of 42 points—the minimum required for an IMO gold medal. (Google's DeepMind AI system also achieved that score this year.) Out of 630 competitors, only 26 students, or 4 percent, outperformed the AI; five students achieved perfect 42s. Given that a year ago language-based AI systems like OpenAI's struggled to do elementary math, the results were a dramatic leap in performance.

In the following conversation, Scientific American spoke with two members of the OpenAI team, Alex Wei and Sheryl Hsu, to discuss how they conducted their work, why the model's lack of response to the sixth question was actually a major step toward addressing AI's 'hallucination' problem and how developing a system capable of writing complex proofs could help lead to artificial general intelligence.

[An edited transcript of the interview follows.]

What led you to suddenly begin preparing an AI model for the IMO just a few months before the competition? What was the spark?

WEI: I had been thinking about math proofs for quite a while. I'm on a team at OpenAI called MathGen. We had just seen the results progress a lot. We felt like we had a shot to get a model that could do really well at the IMO, and we wanted to make a mad dash to get there.

HSU: I used to do math competitions. [Wei] used to do math competitions—he was a lot better than me. The IMO is definitely well known within the [AI research] community, including among researchers at OpenAI. So it was really inspiring to push specifically for that.

Can you talk about your decision to work with a general-purpose AI system rather than a system that was specifically designed to answer math problems?

WEI: The philosophy is that we want to build general-purpose AI and develop methods that don't just work for math. Math is a very good proving ground for AI because it's fairly objective: if you have a proof, it's easier to get consensus on whether it's correct. That's harder for, say, poetry—you'll have more disagreement among readers. And IMO problems are very hard, so we wanted to tackle hard problems with general-purpose methods in the hope that they'll also apply to domains beyond math.

HSU: I'd also say the goal at OpenAI is to build AGI—it's not necessarily to write papers or win competitions. It was important that everything we did for this project also be useful for the bigger goal of building AGI and better models that users can actually use.

In what ways could a reasoning model winning a gold in the IMO help lead to AGI?

WEI: One perspective is to think in terms of how long tasks take. A year ago, ChatGPT could only do very basic math problems. Two years ago—and even a year and a half ago—we were often thinking about grade-school math problems you'd find on fifth-grade homework. For someone really good at math, those take a second or two to read and solve. Then we started evaluating using AIME [the American Invitational Mathematics Examination, a 15-question high school math contest]. That takes around 10 minutes per problem, with about three hours for 15 problems. The IMO is four and a half hours for just three problems—that's 90 minutes per problem. ChatGPT started off being good for quick questions. Now it's better at longer-running tasks, such as 'Can you edit this paragraph for me?' As AI improves, you can expand the time horizon of tasks, and you can see that progression clearly in math.

HSU: Another aspect is that reasoning models were previously very good at tasks that are easy to verify. If you're solving a non-proof-based math problem, there's one numerically correct answer. It's easy to check. But in the real world—and in the tasks people actually want help with—it's more complex. There's nuance: maybe it's mostly correct but has some errors; maybe it's correct but could be stylized better. Proof-based math isn't trivial to evaluate. If we think about AGI, those tasks won't be easy to judge as correct or not; they'll be more loosely specified and harder overall.

What was the process for training the model?

WEI: In general, reinforcement learning trains a model by rewarding good behavior and penalizing bad behavior. If you repeatedly reinforce good behavior and discourage bad behavior, the model becomes more likely to exhibit the good behavior.

HSU: Toward the end, we also scaled up test-time compute [how long the AI model was able to 'think' before answering]. Previously, for a human, problems of this sort might be a few minutes; now we were scaling to hours. That extra thinking time gave surprising gains. There was a moment when we ran evaluations on our internal test set that took a long time because of the increased test-time compute. When we finally looked at the results—and Alex graded them—seeing the progress made me think gold might be within reach. That was pretty exciting.

On the IMO test, the model you developed got five out of six answers correct. But with the sixth question, the model didn't try to provide an answer. Can you tell me more about the significance of this response?

WEI: The model knowing what it doesn't know was one of the early signs of [progress] we saw. Today if you use ChatGPT, you'll sometimes see 'hallucinations'—models don't reliably know when they don't know. That capability isn't specific to math. I'd love it if, for everyday questions, the model could honestly say when it doesn't know instead of giving an answer I must verify independently.

What kind of impact could your work on this model have on future models?

HSU: Everything we did for this project is fairly general-purpose—being able to grade outputs that aren't single answers and to work on hard problems for a long time while making steady progress. Those contributed a lot to the success here, and now we and others at OpenAI are applying them beyond math. It's not in GPT-5, but in future models, we're excited to integrate these capabilities.

WEI: If you look at the solutions we publicly posted for the IMO problems, some are very long—five to 10 pages. This model can generate long outputs that are consistent and coherent, without mistakes. Many current state-of-the-art models can't produce a totally coherent five-page report. I'm excited that this care and precision will help in many other domains.
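In the interview, Wei summarizes reinforcement learning as rewarding good behavior and penalizing bad behavior so the model becomes more likely to repeat what was rewarded. The toy Python sketch below illustrates only that general idea; the actions, reward function, and update rule are invented for illustration and have nothing to do with OpenAI's actual training setup.

```python
import random

# Toy illustration of the reward-then-reinforce loop described in the interview.
# Not OpenAI's method; actions, rewards, and the update rule are invented.

actions = ["attempt_proof", "give_up_early"]
preferences = {a: 0.0 for a in actions}  # higher preference -> chosen more often

def choose(prefs: dict) -> str:
    # Simple toy policy: usually pick the higher-preference action,
    # occasionally explore a random one.
    best = max(prefs, key=prefs.get)
    return best if random.random() < 0.8 else random.choice(actions)

def reward(action: str) -> float:
    # Pretend grader: reward a completed proof attempt, penalize giving up.
    return 1.0 if action == "attempt_proof" else -1.0

learning_rate = 0.1
for step in range(100):
    a = choose(preferences)
    r = reward(a)
    # Reinforce: behavior that earned reward becomes more likely next time.
    preferences[a] += learning_rate * r

print(preferences)  # "attempt_proof" ends up with the higher preference
```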

Veteran analyst makes staggering AI spending prediction

Miami Herald | 3 hours ago

It's a gold rush like nothing seen since the dawn of the Internet. Virtually every business is rushing to capitalize on artificial intelligence, which caught fire after OpenAI's ChatGPT became the fastest app ever to reach one million users when it was launched in 2022. It doesn't matter if it's retail or banking; every industry is knee-deep in figuring out how AI may reshape businesses and boost profitability.

In short, AI has moved from science fiction to everyday use, not just for students looking to hack their homework. Manufacturers are using it to improve supply chains and quality, retailers are using it to boost sales and prevent theft, healthcare companies are using it to design better medicines, and finance is using it to hedge risks. Even the military is in on the action, exploring its use on the battlefield.

The flurry of activity has meant a tidal wave of demand for the computers necessary to crunch AI's heavy workloads, a task ill-suited to the legacy central processors found in most data centers. As a result, companies are spending big money on infrastructure, padding profits for AI darlings like Nvidia and Palantir, and sending their stocks soaring.

The situation isn't lost on veteran analyst Sam Stovall, CFRA's Chief Investment Strategist, who has been tracking stocks for over 30 years. His team thinks there's still a lot of runway left in the data center buildout.

In the gold rush, it was said that most of the money was made by those selling the picks and shovels to those traveling west in search of riches. The modern-day equivalent could be players like Nvidia (NVDA), Broadcom (AVGO), AMD (AMD), and Super Micro Computer (SMCI), companies at the heart of supplying the demand for faster, more efficient data center infrastructure.

Related: Nvidia AI outlook resets after Meta Platforms, Microsoft update plans

The darling, of course, is Nvidia. Nvidia's highly optimized graphics processors for gaming and crypto mining proved the perfect tool for data-hogging AI apps, causing sales to surge after ChatGPT's launch shocked the planet and caused a major shift in IT budget priorities. Sales of Nvidia's GPUs -- first the H100, then the H200, and now its Blackwell lineup -- have grown from below $27 billion in 2022 to a stunning $130 billion last year. The reason? Its GPUs are paired with next-gen CUDA software optimized to keep workflows humming along more efficiently than any other chip option on the planet.

Availability, however, has been troublesome. Demand for Nvidia's chips has been higher than it can meet, and data centers have gotten creative, leveraging specialty chips, such as ASICs made by Broadcom and others, to fill in the gaps. AMD also recently joined the party, launching its own GPU lineup in a bid to carve away some of Nvidia's whopping 90% share of the AI chip market.

It's not just chips, though. It's the entire stack. And servers that can handle all the activity are also being bought by the pallet. Super Micro, a specialty server player with a liquid-cooled lineup, has seen its revenue surge to nearly $22 billion from just a shade over $5 billion in 2022.

Investors have been handsomely rewarded for owning the likes of Nvidia, Super Micro, and Palantir, which provides the platform many companies use to build their AI apps. Since 2022, those three companies have generated returns of 1,100%, 2,330%, and 421%, respectively, for investors.

More AI Stocks:
Nvidia quietly buys more stock in AI infrastructure favorite
Veteran analyst names 30 AI stocks shaping future of technology
Cathie Wood buys $12 million of tumbling AI stock

It hasn't been a straight line higher, given some nerve-racking pauses and dips along the way. Based on Stovall's analysis, it's certainly possible we're about to experience another pullback in IT stocks soon. Still, the longer-term outlook is pretty compelling.

"We continue to see upside to consensus estimates over the next 2-3 years, with annualized earnings growth of at least 15%-20% validating the sector multiple with higher AI infrastructure spend to remain the core driver to growth," wrote Stovall in a note to clients. "We think data center spending is poised to more than double to over $1 trillion by 2028, with the Big Four stocks representing about half of the total as the customer base broadens."

A doubling in spending by 2028 is pretty eye-popping, given what we've already witnessed. If Stovall is correct, many people may be surprised, given arguments that IT stocks, including Nvidia, Palantir, and rivals, are overpriced.

During the Internet boom, stocks surged too far too fast, pricing in tremendous growth and setting the bar incredibly high. As a result, I saw many of the high-flyers cut in half or, worse, go out of business, despite the fact that spending to build out the Internet was continuing to climb.

There will undoubtedly be winners and losers, making quality key. Investors in the likes of Lycos never recovered, but those who bought and held Amazon through thick and thin have made unfathomable wealth. Anything can happen, but it wouldn't surprise me if quality again comes out on top. Those companies with the size and pockets deep enough to pivot and expand will likely flourish, while those losing money and operating on shoestring budgets could wind up in Wall Street's proverbial dustbin.

That said, Stovall's outlook is largely optimistic for earnings growth at AI-related companies, and revenue and profit growth are the lifeblood of stock prices.

"We continue to have a positive fundamental outlook on the sector," wrote Stovall, "but acknowledge that some digestion of recent gains is warranted. We note that the sector P/E multiple on next 12-month estimates is near the high-end of its 20-year historical range."

Cheap? Nope. But that may not mean any short-term downside is more than a pullback in an uptrend. The music may stop at some point, but nobody knows when. As far as CFRA is concerned, it still rates the information technology sector an overweight, despite the risk of short-term weakness.

Related: Nvidia's China deal controversy puts US officials on their back feet
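Stovall's note pairs a forecast that data-center spending will more than double to over $1 trillion by 2028 with 15%-20% annualized earnings growth. As a rough back-of-the-envelope check only, the sketch below shows what a doubling over a few years implies as an annual growth rate; the starting spend level and the four-year horizon are assumptions for illustration, not figures from CFRA.

```python
# Back-of-the-envelope: what annual growth rate doubles spending over a few years?
# The base value and horizon below are placeholders, not CFRA's figures.

base_spend = 500e9   # assumed current annual spend, in dollars
target = 1e12        # "over $1 trillion"
years = 4            # e.g., roughly 2024 -> 2028

required_cagr = (target / base_spend) ** (1 / years) - 1
print(f"Implied annual growth to double in {years} years: {required_cagr:.1%}")  # ~18.9%

# Compare with the 15%-20% annualized growth range cited in the note.
for g in (0.15, 0.20):
    print(f"{g:.0%} a year for {years} years multiplies spending by {(1 + g) ** years:.2f}x")
```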

ORMCO™ UNVEILS SPARK™ STAGERX™, THE NEXT EVOLUTION OF ALIGNER TREATMENT PLANNING

Yahoo | 4 hours ago

- Enhanced Treatment Accuracy Through a Powerful, Visual, and Streamlined Modular Workflow -

BREA, Calif., Aug. 21, 2025 /PRNewswire/ -- Ormco Corporation, providing innovative products, solutions, and education to the orthodontic community for over 60 years, today announces the launch of Spark™ StageRx™, a groundbreaking visual workflow revolutionizing aligner treatment planning. This new tool not only improves clinical communication but also brings more efficiency to the doctor's workflow, enhancing treatment accuracy through standardized and modular protocols.

"StageRx enables the peer-to-peer sharing of aligner protocols, fostering a vibrant community of aligner knowledge among practitioners, with 89% of doctors stating they are likely to recommend StageRx to their colleagues.1 Based on initial customer feedback, we are seeing this platform make a meaningful impact in practices," said Jay Issa, Vice President, Portfolio Management and Global Marketing, Ormco. "It comes with Spark Aligner protocols created by top global aligner experts, making it a helpful tool for both novice and advanced aligner doctors."

Key Features & Benefits of Spark StageRx:
Standardize communication with designers in a non-textual format. 84% of doctors agree StageRx has improved communication with treatment designers.1
Protocol library provides select Spark standard and expert protocols, as well as your own creations, for easy access and reuse.
Visualize your setup with an easy drag-and-drop staging workspace.

"This has completely transformed our clear aligner treatment planning process. It's visual, intuitive, and easy to use—you simply drag and drop," said Dr. Nadia Nizam.2 "The platform enables a smarter, more collaborative workflow by eliminating design inconsistencies and standardizing communication, all without sacrificing customization. My team can tailor treatment for each patient by modifying or adding steps, which are then submitted directly through the Spark DTX portal—ensuring both precision and consistency."

For more information on Spark StageRx, visit Orthodontic Software | Spark™ Clear Aligner System | Ormco.

About the Spark™ Clear Aligner System
Spark Aligners are manufactured by Ormco, a global leader in innovative orthodontic products with 60 years of expertise, R&D, and high manufacturing standards. Ormco has helped doctors treat more than 20 million patients in more than 140 countries. Spark Approver Software is designed to give doctors more control and flexibility, while Spark's advanced aligner technology and TruGEN™ material provide more sustained force retention. Compared to the leading aligner brand and when contacting the same tooth, the Spark Aligner has 18% better surface contact with the tooth; it is also designed to be clearer and more comfortable than the leading aligner brand and to stain less than the leading aligner material, which may be why 100% of patients recently surveyed said they would recommend Spark Aligners to a friend.3 For more information about Spark Aligners, visit the Ormco website.

About Ormco
Envista is a global family of more than 30 trusted dental brands, including Ormco, Nobel Biocare, DEXIS, and Kerr, united by a shared purpose: to partner with professionals to improve lives. Ormco, headquartered in Brea, Calif., is a global leader and innovator of orthodontic products and solutions to help enhance the lives of its customers and their patients. For more than 60 years, Ormco has partnered with the orthodontic community to help create over 20 million smiles in over 140 countries. Distinguished products range from twin brackets (Symetri™ Clear Brackets, Titanium Orthos™, and Mini Diamond™) to pioneering self-ligating appliances with the Damon™ System (including Damon Ultima™ System and Damon™ Clear2). The Spark™ Clear Aligner System is designed to meet the needs of the orthodontist with the TruGEN™ material and Approver Software. Ormco's Insignia™ Advanced Smile Design™ provides an all-inclusive customized indirect bonding solution for efficiency through personalization. From personalized service to professional education programs and marketing support, Ormco is committed to helping orthodontists achieve their clinical and practice management objectives. Connect with Ormco on Facebook and LinkedIn.

1 Data on file. Based on a survey of doctors completed in February 2025.
2 Dr. Nadia Nizam is a paid consultant for Ormco. The opinions expressed are those of the doctor. Ormco is a medical device manufacturer and does not dispense medical advice. Clinicians should use their own judgment in treating their patients.
3 Marketing Confirmation Study. Data on file with Ormco Corporation.

SOURCE Ormco Corporation
