Former NSA Director and SandboxAQ CEO on Quantitative AI and its inevitable integration

CNBC | 22-05-2025
Former NSA Director Admiral Michael Rogers and SandboxAQ CEO Jack Hidary join the 2025 CNBC CEO Council Summit to discuss Quantitative AI, the technology's evolution and the possibilities for its future integration.

Related Articles

AI Adoption Is Racing Ahead but Security Is Still Struggling to Keep Up

Newsweek | 2 days ago

Artificial intelligence is no longer an emerging technology; it's a business imperative. Companies are embedding AI into customer service, product design, logistics and R&D, often at a breathtaking pace. But while companies are eagerly adopting AI, the safeguards to protect these robust systems are often left behind.

A new AI Security Benchmark Report from SandboxAQ, a company delivering solutions at the intersection of AI and quantum techniques, reveals a growing divide between AI adoption and AI-specific security readiness. Marc Manzano, general manager of the Cybersecurity Group at SandboxAQ, calls this the "confidence paradox," where most leaders feel secure, yet few have tested those defenses.

"The biggest red flag was what we call the confidence paradox. Most security leaders feel confident about their AI defenses, but the data shows otherwise. Seventy-seven percent of leaders feel secure, but 72 percent told us that they haven't run a single comprehensive AI risk assessment. We call it a pattern of 'unverified confidence,'" Manzano told Newsweek.

Racing Ahead Without a Map

One of the main drivers of this gap is speed. Businesses are under immense pressure to adopt AI, whether to keep up with competitors, cut costs or unlock new capabilities. But this rapid rollout is often happening without corresponding investments in protection.

"AI adoption is accelerating rapidly, driven by business pressure, not security readiness," Manzano said. "Traditional security tools weren't built for autonomous systems that make decisions and communicate on their own.…Many are just extending old playbooks to new systems, which doesn't work."

That "old playbook" problem is widespread. The report found that only 6 percent of companies have AI-specific security in place. The rest are relying on IT or security teams whose expertise was built for human-driven workflows, not machine-speed decision-making.

"Right now, only 10 percent of companies have dedicated AI security teams. In most cases, responsibility falls to existing security or IT teams, which may not have the right tools or expertise," Manzano explained. "AI security is not just a technical challenge; it is an organizational one."

John Heasman, chief information security officer at Proof, an identity verification network, told Newsweek that while many companies extend third-party risk management (TPRM) to AI vendors, "companies should already have a robust third-party risk management process to assess cybersecurity measures vendors have in place to protect their data. This can be extended to place additional emphasis on how the vendor performs data governance and security around AI, [for example], 'Is our data used for model training?' and 'What measures are in place to protect the integrity of the AI models and data?'"

The New Attack Surface

One of the most urgent issues in AI security is the rise of nonhuman identities: the API keys, tokens and certificates that allow machines to access systems and data.
"In the past, hotels only handed out keys to people—staff and guests—who needed access to specific rooms," Manzano said. "But today, it's not just people checking in. There are cleaning robots, food delivery drones, automated systems, and AI agents—all of them need keys to do their jobs.…Unlike people, they don't sign in at the front desk, and they never check out." The problem is that many organizations lack a complete inventory of these "keys" and are unaware of who—or what—is using them. That lack of visibility makes it difficult to enforce proper access controls or detect when credentials have been stolen or copied. And as these identities multiply, they become one of the fastest-growing, least-monitored attack surfaces. Heasman noted that one way to reduce risk is by controlling AI system integration points. "One thing within the control of IT and security teams is how the integration of AI systems occurs, [for example] what sources of data an AI system may have access to, how it accesses that data and how end users interact with the system. Teams can greatly lower risk by adhering to tried and tested principles such as least privilege and strong logging, and bringing in companies to perform security testing, penetration testing, to find weaknesses." Manzano warns that attackers can exploit these machine identities with unprecedented speed. "AI agents are designed to navigate [networks]. They can test thousands of credentials, connections and permissions in minutes to find one way in that a human might never spot.…By giving these AI agents long-lived credentials, companies are giving a super-intelligent bloodhound a master key and telling it to find every unlocked door in our entire estate." Inside a Company Trying to Get It Right Albert Invent, a company building AI tools for the chemical and materials science industry, is already confronting these challenges head-on. Its leadership understands the dual risks of AI: internal use of AI tools that could expose intellectual property and customer-facing AI products that must be safeguarded against misuse. "We have two categories: For internal tools like ChatGPT and code assistants, we use enterprise-grade providers with clear data policies and follow a 'least exposure principle'—only sharing minimal data needed. For AI tools we build for customers, we've implemented strict access control layers, prompt guardrails and citation systems to prevent hallucinations while protecting chemical IP," Nick Talken, CEO of Albert Invent, told Newsweek. Talken also recognizes that AI-specific risks require AI-specific defenses. "We're transitioning from general security to AI-specific audits as our platform scales," he explained. "We've implemented multiple layers—strict access controls, prompt injection detection, hardened knowledge systems—and we're continuously adding controls like use-case/abuse-case reviews, anomaly detection and comprehensive input/output monitoring." Rather than assuming its defenses are airtight, the company takes a more cautious stance. "AI security is evolving so rapidly that we stay humble about what we don't know yet," Talken said. "Rather than being confident, I'd say we're committed to staying vigilant and adapting as new risks emerge." Why Traditional Tools Fall Short AI systems differ fundamentally from traditional IT infrastructure. They make autonomous decisions, interact dynamically with other systems and generate outputs that can be manipulated in novel ways. 
This means security incidents can unfold much faster, and with less visibility, than in human-driven environments.

"AI introduces risks that move faster and are harder to see," Manzano said. "Traditional tools rely on human behavior patterns, predictable workflows and periodic reviews.…The tools and controls built for human users via endpoint devices like laptops and smartphones don't apply well here."

A lack of industry standards further complicates the mismatch between old defenses and new threats. Without clear guidance, many companies are unsure how to begin securing AI, let alone maintain that security over time.

A Machine-Speed Threat Landscape

The report's conclusion is blunt: The nature of cyberattacks is changing, and companies that cling to reactive, compliance-driven approaches will be outpaced.

"We are now in an era of machine-speed threats, and a reactive security posture focused on compliance and post-breach response is a losing strategy," Manzano said. "Companies can't wait to see what will happen here. This is a now problem."

Proactive measures start with visibility: knowing exactly where AI is deployed, what data it touches and what systems it connects to. From there, organizations can close gaps, enforce tighter access controls and continuously test for vulnerabilities.

"Capabilities around visibility into AI tools varies by vendor, so right now, companies need to take what they can get, and advocate for greater visibility where it's lacking," Heasman said. "Like any system, it makes sense to centralize logs—[for example], in a SIEM, or data analytics platform where the security and IT team can query them—and determine a baseline for normal behavior, then set up alerts when there is significant deviation from this."

But as Manzano points out, not every company has the resources to do this alone. In those cases, partnering with outside experts or deploying specialized tools may be the only way to keep up.

"Many organizations don't have the resources to manage this in-house. In those cases, companies will need to deploy solutions that can both uncover where a company is at risk and take steps to close those security gaps. Proactive security measures are the only way to survive in this era of machine-speed attacks," he said.

Ultimately, the goal of the research is to shift leaders' mindsets from misplaced confidence to genuine capability. "Too many leaders assume their existing tools are good enough," Manzano said. "This report shows that assumption is wrong, and business leaders need to make changes before it is too late."
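Heasman's advice above (centralize logs, establish a baseline of normal behavior, then alert on significant deviation) can be sketched in a few lines. The snippet below is a minimal illustration under assumed inputs: the log format, the identity names and the three-sigma threshold are hypothetical, and a real deployment would query a SIEM or log platform rather than in-memory dictionaries.

```python
# A minimal sketch of baseline-and-alert monitoring for machine identities.
# Log format, identity names and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

# Historical hourly request counts per machine identity (e.g., an AI agent's API key),
# as might be exported from a centralized log store.
history = {
    "ai-agent-key-1": [110, 95, 102, 98, 105, 101, 97, 108],
    "ci-bot-token":   [12, 15, 9, 14, 11, 13, 10, 12],
}

# Latest observed counts to evaluate against the baseline.
latest = {"ai-agent-key-1": 2400, "ci-bot-token": 13}

def alerts(history, latest, sigmas=3.0):
    """Flag identities whose latest activity deviates sharply from their baseline."""
    flagged = []
    for identity, counts in history.items():
        mu, sd = mean(counts), stdev(counts)
        observed = latest.get(identity)
        if observed is not None and sd > 0 and abs(observed - mu) > sigmas * sd:
            flagged.append((identity, observed, round(mu, 1)))
    return flagged

for identity, observed, baseline in alerts(history, latest):
    print(f"ALERT: {identity} made {observed} requests this hour (baseline ~{baseline})")
```

The same pattern extends to other signals per machine identity, such as data volumes, destination systems or permission use.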

Black Hat and DEF CON offer lessons for Congress

Politico | 7 days ago

With help from John Sakellariadis

Driving the day — After a week in Las Vegas talking to some of the top cybersecurity professionals in the country, your host compiled three key takeaways from Black Hat and DEF CON for federal lawmakers.

HAPPY MONDAY, and welcome to MORNING CYBERSECURITY! John and I are back on the East Coast after Black Hat and DEF CON. By the end, I successfully convinced a small group of reporters to go to The Rainforest Cafe with me. It was a real bonding moment. Anyone else get into any fun side quests? Drop me a line at dnickel@

Follow POLITICO's cybersecurity team on X at @RosiePerper, @johnnysaks130, @delizanickel and @magmill95, or reach out via email or text for tips. You can also follow @POLITICOPro on X.

Want to receive this newsletter every weekday? Subscribe to POLITICO Pro. You'll also receive daily policy news and other intelligence you need to act on the day's biggest stories.

On The Hill

HIGHLIGHTS — Cybersecurity professionals converged on Las Vegas last week for the back-to-back Black Hat and DEF CON cyber conferences. Top cyber experts from the private and public sectors highlighted some of the biggest challenges and innovations in the industry today, from the scope of U.S. adversaries' cyber capabilities to how federal cuts are impacting the country's cyber posture.

— No target is 'too small': During the conference, the cybersecurity community detailed that foreign adversaries such as China have the hacking capabilities to cast a wide net in cyberspace. And no entity involved in vital U.S. defense operations is too small to be considered an attractive target for foreign hackers, according to a top NSA official.

'China's hacking resources outnumber those of the U.S. and allies combined, and China has stolen more corporate data from the United States than any other nation in the world,' said Bailey Bickley, chief of defense industrial base defense at the NSA's Cybersecurity Collaboration Center, which helps U.S. defense contractors protect their networks.

Bickley said during a Black Hat keynote that many of the 'small companies' in the defense industrial base often 'think what they do is not important enough' to be targeted by Chinese-backed hacking groups. 'But when you have the significant resources like that to conduct mass-scanning and mass exploitation, there is no company and no target too small,' she said.

Cybersecurity experts have long warned that any device connected to the internet can be a key gateway for hackers to breach a network. At DEF CON on Sunday, officials from the U.S. Coast Guard disclosed last year's discovery of a little-known device that potentially left ports across the country exposed to Chinese hacking. Kenny Miltenberger, a lieutenant commander on one of the Coast Guard's cyber protection teams, told John that many of the port operators weren't even aware of the devices: cellular modems made by a Chinese company, embedded in cranes and support infrastructure.

'The good news story here is we're finding a lot of vulnerabilities on cranes, and our partners are mitigating those,' Miltenberger said of the Coast Guard's work to help operators find and plug cyber vulnerabilities.

— 'Too close to the bone': Meanwhile, experts spoke out about the impacts of the Trump administration's cuts to federal cyber agencies, including CISA and the Pentagon's Cyber Command and Defense Information Systems Agency.
'You've cut assets at CISA and in other places … we're less able to make a government contribution to the collective welfare of the cyber ecosystem,' said Chris Inglis, former national cyber director, on the sidelines of Black Hat.

Congress has previously examined how cuts to the federal cyber workforce and funding could hurt the country's ability to defend itself long-term. Inglis added that though he isn't aware of the specific number of positions cut from cyber agencies, 'I do think we should be concerned about cutting those capacities too close to the bone.'

— AI everywhere: The role that artificial intelligence is playing in global cyber operations was one of the key topics of discussion. Michael Leland, field CTO of cyber firm Island, told your host at Black Hat that the U.S. and its adversaries see AI as 'the great force multiplier.'

'AI is going to be used as a tool, both by the [U.S. government], but the adversaries are leveraging it against us in the opposite way,' he said, highlighting reports of adversarial use of AI — including Russian-linked hackers using the tech to spread disinformation and Iranian-linked hackers using AI to ramp up the scope of phishing campaigns.

At DEF CON on Friday, the Defense Advanced Research Projects Agency announced the winners of a years-long competition to build AI models to autonomously find and patch vulnerabilities in open-source code used across critical infrastructure networks.

For experts like John Watters, CEO of cyber risk firm iCounter, entities will have to walk a fine line of embracing AI while also ensuring it is implemented securely. 'AI is being deployed everywhere by everybody, individually,' he told your host. 'AI [is being] fully embraced by adversaries, and defenders are still kind of looking at, 'What's the risk of doing it? And [how] should it be regulated?'

At the Agencies

COAST TO COAST — Hackers at DEF CON concentrated their efforts on attacking a 20-foot Coast Guard Auxiliary vessel, a self-driving Pentagon naval drone and software from the largest U.S. ports.

John reported on Sunday that the decision to focus hacker efforts at the conference on maritime vessels and technology signals two pressing worries: that recreational boats, U.S. ports and commercial ships are increasingly hooked up to the internet, and that the U.S. military isn't prepared for the cyber assault many expect China to launch if it invades Taiwan.

'The South China Sea is where the next generation of autonomous warfare and homeland defense will be demonstrated,' said Duncan Woodbury, executive director of the conference's Maritime Hacking Village — which made its debut at this year's conference.

— Boosting security: The demonstration at DEF CON comes as the U.S. Coast Guard implemented a final rule last month that requires U.S.-flagged ships and ports to beef up their network defenses. The Coast Guard also received an injection of $25 billion in Congress' recent reconciliation bill, which will enable it to ramp up its cybersecurity measures.

Critical infrastructure

TSUNAMI OF THREATS — A small army of hackers is coming together to protect one of the country's most vulnerable sectors: water systems. Jake Braun, former acting principal deputy national cyber director, unveiled a new group at this year's DEF CON to help beef up water systems' cyber defenses. 'We've [seen] both the urgency of the threat and the potential of a community-driven solution,' Braun, co-founder of the group, told your host.
He added that the new group, known as DEF CON Franklin, was unveiled after nine months of pairing DEF CON hackers with small water utilities across four states. Cyberattacks on water treatment facilities could have potentially fatal consequences, ranging from shutting off water access completely to altering the chemical balance of the water.

— Zoom out: Cyber experts have signaled that the water sector faces significant gaps in cybersecurity funding and expertise, which was demonstrated by cyberattacks against water facilities in the U.S. by Iranian-linked hackers in 2023.

'Protecting our nation's critical infrastructure isn't a want but a necessity, and for the nearly 50,000 water systems nationwide, they need the tools and resources to not only be cyber aware but cyber resilient,' said Matt Holmes, CEO of the National Rural Water Association.

Hacked

INFORMANTS AT RISK — Officials are worried that Latin American drug cartels are among the groups that obtained sensitive court information from a massive breach of the federal judiciary's case filing system.

John and POLITICO's Josh Gerstein reported on Friday that judicial officials with knowledge of the cyberattack are concerned cartels could weaponize the stolen data to identify witnesses in cases the federal government has opened against them or gain insights on criminal investigations, such as arrest and search warrants.

— Zoom out: As of Friday, at least a dozen district courts across the country are believed to have been directly impacted by the ongoing breach of the digital case filing system — known as CM/ECF. The incident is believed to be one of the most severe cyberattacks on a federal court filing system in years. While it's unclear if the cartels were directly connected to the breaches, it's possible that they could purchase data from the hackers with access to the system.

Quick Bytes

TROLL TOLL — Cybersecurity researchers unmasked a prolific scamming operation behind the infamous unpaid-toll and undelivered-mail-item texts, writes Zack Whittaker for TechCrunch.

'ANSWER MACHINE' — President Donald Trump's new AI search tool on Truth Social contradicts the president by saying the 2020 election wasn't stolen and that the Jan. 6, 2021, Capitol riot was violent and linked to Trump's 'baseless claims of widespread election fraud,' reports Drew Harwell for The Washington Post.

PRIVACY PLEASE — Apple's AI ecosystem, known as Apple Intelligence, routinely transmits sensitive user data to company servers beyond what its privacy policies disclose, cyber firm Lumia Security revealed, Greg Otto writes for CyberScoop.

INVESTING IN THE CYBER TALENT POOL — The nonprofit NobleReach Foundation last week announced its second-ever cohort of public-interest cyber and emerging tech scholars. The 28 recent graduates and early-career professionals will take up one-year posts at places including the Department of Defense, the cities of Pittsburgh and Oklahoma City, and the state of Maryland.

Chat soon. Stay in touch with the whole team: Rosie Perper (rperper@ John Sakellariadis (jsakellariadis@ Maggie Miller (mmiller@ and Dana Nickel (dnickel@

'Tracking Everyone, All the Time': What Americans Need To Know About Israel's Secret Eavesdropping Program

Yahoo | 09-08-2025

Nowadays, it seems that the limit to government surveillance is neither the law nor technological capabilities; it's storage space. In the 1990s, the U.S. National Security Agency was "annually converting more than 22 million pounds of secret documents into cheap, soluble slurry" in order to make room for more, according to Body of Secrets by James Bamford. In 2014, the NSA spent $1.5 billion on a massive data center in Utah riddled with electrical problems.

But Unit 8200, the Israeli equivalent of the NSA, seems to have figured out a simple workaround for the problem: Contract it out to private industry. A joint investigative report by The Guardian and the Israeli publications +972 Magazine and Local Call revealed on Wednesday that Unit 8200 has been storing massive amounts of intercepted phone audio on Microsoft's Azure cloud service. Microsoft, which pleaded ignorance of what the Israeli government was using its servers for, is not the only American institution involved in setting up the program. Its architect, who trained under U.S. military instructors, may have created a blueprint for future mass surveillance in other countries.

The cloud-powered surveillance program was the brainchild of Brig. Gen. Yossi Sariel, the former commander of Unit 8200. Sariel spent 2019 at the National Defense University, a U.S. Department of Defense academy for American and foreign national security professionals, The Washington Post reported last year. In 2020, he "returned to Israel brimming with plans," according to the Post, and took command of Unit 8200 from 2021 until last year. One of those plans, this week's reporting revealed, was to work with private cloud providers.

Under Sariel's tenure, Unit 8200's ability to retain and process audio data massively increased. The unit has gone from wiretapping tens of thousands of subjects to recording millions of people's calls, according to the report. Unit 8200 officers told The Guardian and +972 that the unofficial mantra of the project was "a million calls per hour." (The combined population of Israel and the Palestinian territories is 14 million.)

Leaked files suggest that Unit 8200 had a goal of storing 70 percent of its data on Azure and that the Israeli military already had 11,500 terabytes of data in total stored on an Azure server in the Netherlands by July 2025. That would be the equivalent of 200 million hours of audio, although it's not clear how much of those 11,500 terabytes comes from Unit 8200's phone intercepts.

Microsoft confirmed that Unit 8200 was a customer of its data security services but said that it had "no information" about the data stored on its servers. After the report was published, the Israeli military put out a statement claiming that "Microsoft is not and has not been working with the [Israel Defense Forces] on the storage or processing of data."

Even before the surveillance revelations, the relationship between Microsoft and the Israeli government was a subject of controversy. Several Microsoft employees have been fired for publicly protesting over the issue. Most recently, engineer Joe Lopez was fired in May 2025 after shouting "Microsoft is killing Palestinians" during CEO Satya Nadella's keynote speech.

Beyond the specifics of the Israeli-Palestinian conflict, however, the new reporting carries some lessons about the future of surveillance. Monitoring begets more monitoring; governments set up dragnets in response to security threats, then realize how useful the data is for other goals.
The cycle used to be limited by data storage capacity. Nowadays, private industry is increasingly willing and able to fulfill governments' appetite for surveillance capacity, though it's also afraid of bad press and legal liability.

Sariel became interested in mass surveillance data collection after a 2015 wave of lone-wolf Palestinian attacks that were impossible to predict by conventional investigative techniques. Someone "decides to perpetrate an attack using a kitchen knife to stab a victim, or the family vehicle to run people over," Sariel wrote in 2021 under a pseudonym, which The Guardian exposed as his last year. "Sometimes the person doesn't even know a day before that he or she is going to commit such an attack. In these cases, traditional intelligence agencies are helpless. How can such an attack be predicted or prevented?"

Intelligence officers told +972 Magazine that Sariel became obsessed with "tracking everyone, all the time," and "suddenly, the public became our enemy." While some officers insisted that the surveillance dragnet has saved lives, another portrayed it as a machine that constantly creates pretexts for more aggressive action.

"These people get entered into the system, and the data on them just keeps growing," an intelligence officer who recently served in the West Bank told +972. "When they need to arrest someone and there isn't a good enough reason to do so, [the surveillance repository] is where they find the excuse. We're now in a situation where almost no one in the [Palestinian] territories is 'clean,' in terms of what intelligence has on them."

For all its technical sophistication, Israeli intelligence failed to anticipate the Hamas attacks of October 7, 2023. (In fact, Israeli leaders' overconfidence in their high-tech defense may have led them to miss signs of an impending war.) Yet those attacks became another reason to expand the data collection, because the army was preparing to take "long-term control" of Gaza, an intelligence officer told +972.

Israeli officials were also uncomfortable with putting such sensitive data in the hands of a foreign third party. Documents from the Israeli justice ministry, obtained by +972, warned that cloud services based in other countries could be exposed to legal liability or even be forced to hand over intelligence data. After the war in Gaza began, Microsoft officials warned Israeli counterparts not to use its services for lethal military targeting, a source at Microsoft told The Guardian.

If recent history is any indication, U.S. officials will be watching these challenges closely. The Bush administration looked to Israeli precedents for justifying war-on-terror measures legally, and American police have often incorporated trips to Israel into their counterterrorism training. The "start-up nation" may soon become a model for public-private surveillance partnerships as well.
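The 200-million-hour equivalence cited above can be sanity-checked with a rough back-of-the-envelope calculation. The 128 kbit/s audio bitrate assumed below is illustrative, chosen to show what the reported numbers imply; it is not a figure from the investigation.

```python
# Rough check: how many hours of audio fit in 11,500 terabytes?
# The bitrate is an assumed value, not a detail from the report.
TERABYTE = 1e12                             # bytes (decimal terabyte)
storage_bytes = 11_500 * TERABYTE           # data reportedly stored on Azure

bitrate_kbps = 128                          # assumed audio bitrate, kilobits per second
bytes_per_second = bitrate_kbps * 1000 / 8  # 16,000 bytes of audio per second

hours = storage_bytes / bytes_per_second / 3600
print(f"~{hours / 1e6:.0f} million hours of audio")  # prints ~200 million hours
```

At that assumed bitrate, the reported 11,500 terabytes and the 200-million-hour estimate line up; a lower bitrate typical of compressed telephone audio would imply correspondingly more hours.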
