Latest news with #AhmedBanafa


CBS News
24-06-2025
- Business
Bay Area technology experts comment on possibility of cyberattacks following U.S. strikes on Iran
A day after President Trump announced the U.S. struck nuclear facilities in Iran, the Department of Homeland Security warned about the potential for low-level cybersecurity attacks. Even though the U.S. has announced a ceasefire between Israel and Iran, Paul Keener, a strategist at GuidePoint Security, a cyber consulting firm headquartered in Virginia, said the potential attackers don't even have to be Iranian.

"That is one of Iran's typical methods, because they use hacktivist groups. And so, the hacktivist groups are people who are ideologically aligned," said Keener. "And so, it creates the appearance of something bigger and more responsive than it might actually be. And so, yeah, is it likely that this will happen? I think so. If they can focus and do that, I would anticipate that."

Prof. Ahmed Banafa is a technology expert at San Jose State University. He said Iran has engaged in cyberattacks in the past and usually focuses on the kind of infrastructure that can disrupt everyday life.

"The war, including what we are seeing here, is actually not just only in the skies or the battlefield itself. It's also on the server rooms and the algorithms and the data centers and the R and D of the companies," Banafa told CBS News Bay Area. "That's going to be more effective than anything else because it creates a lot of damage to a country."

"You know, companies like PG&E for example, so they can paralyze the grid," he said. "Number two, they can go after the financial companies, like Stripe and PayPal, which is going to create a lot of panic if people are using those systems. They can go after the research that we have seen, R and D, in many of those companies, especially with artificial intelligence now. Silicon Valley is leading the world there."

And it doesn't take sophisticated weaponry to do that. Just having someone open the wrong email or click on the wrong link can allow a hacker to infiltrate a system and potentially bring an operation to a halt.

The Department of Homeland Security issued an alert on Sunday saying, "Low-level cyber-attacks against US networks by pro-Iranian hacktivists are likely, and cyber actors affiliated with the Iranian government may conduct attacks against US networks."

Stanford professor Allen Weiner said he thinks some fears may be overblown. The former State Department attorney specializes in international conflict resolution, and he believes Iran is more concerned about keeping its own people under control, especially now that Hezbollah has been crippled and Assad's regime in Syria has collapsed.

"And so, what that means now," said Weiner, "is that a lot of the allies that Iran had that would have been in a position to put pressure on Israel have been dramatically weakened, leaving Iran itself very exposed."

He said Iran's rulers are generally unpopular with the Iranian people, so much of the regime's behavior is designed to show strength as a way of keeping their own people in line, and that even before the announced ceasefire, Iran's retaliation had been largely for show. Though Iran could threaten cyberattacks, Weiner said, he believes its leaders may be looking for a way out of the conflict.

"I think the Iranians have been signaling that they want to de-escalate," he said, "realizing that they're completely outmatched militarily by the Israelis. And the minute you add the United States to the mix, I think they recognize that a shooting war will go very badly for them."


Forbes
08-05-2025
- Science
AI's Biggest Secret — Creators Don't Understand It, Experts Split
In a recent blog post titled "The Urgency of Interpretability," Anthropic CEO Dario Amodei wrote what many in the tech world have hesitated to admit: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology."

Anthropic declined to clarify or comment on Amodei's statement for this article, and few would deny it is a provocative one. It is provocative enough to have reignited debate among AI experts about whether the opacity of today's frontier AI models represents a legitimate technological emergency or simply a transitional phase on the path to maturity.

Dr. Ahmed Banafa, a technology expert and engineering professor at San Jose State University, believes Amodei's admission should not be brushed aside. "Yes, non-techie individuals and investors should be concerned," he wrote in an email response (all experts quoted in this article responded by email). "What we're witnessing with AI is a break from the norm in the history of technology. In the past, engineers could explain exactly how a system functioned. Today, with advanced AI models, especially those based on deep learning, we often don't have full visibility into how or why they reach certain conclusions."

Banafa emphasizes that this ambiguity is particularly troubling in high-stakes arenas such as healthcare, law enforcement and finance, where the consequences of machine-generated decisions are significant. "Being concerned is not the same as being fearful," he added. "The AI research community is actively working on solutions... but responsible innovation should be the goal — not just rapid advancement."

Other experts see less reason for alarm and more room for context. Ben Torben-Nielsen, Ph.D., MBA, an internationally recognized AI consultant with two machine learning patents, compares the interpretability dilemma to the evolution of other complex tools. "Consider fMRI technology," he stated. "Most doctors do not understand the intricate physics of how a measured magnetic signal becomes a pixel on a screen. Yet, they use it effectively for diagnostics because they know it works and trust it. To me, AI seems to be on a similar trajectory."

Torben-Nielsen suggests that interpretability may be a temporary concern. "Once AI systems are sufficiently reliable and we trust them, the demand from the vast majority for deep 'how did it get this answer' explanations will likely fade, much like detailed fMRI physics is not a concern for most clinicians."

But he doesn't dismiss the risks. "These systems will always have failure modes — unexpected deviations in behavior. What is concerning, and where alarm is justified, is the issue of responsibility. It's profoundly disappointing when AI labs admit, 'we don't even know how it works,' and then make the user bear all the caution. It's like giving someone a 1,000 bhp hypercar with rim brakes and just saying, 'better be careful.'"

Julia McCoy, founder of the AI consultancy First Movers, views the interpretability challenge as more of an opportunity than a crisis. "Dario Amodei's admission is sobering, but it represents opportunity rather than cause for alarm," she wrote. "This technological frontier reminds me of previous innovations in history where understanding lagged behind implementation — from electricity to nuclear energy."

Her advice for non-technical professionals? Embrace AI literacy, understand the limitations of today's models and find practical ways to augment human judgment. "Those who understand both AI's capabilities and its limitations will be uniquely positioned to thrive in this new landscape. I think the real risk isn't AI itself, but remaining on the sidelines during this transformative period."

However, Lin Qiao, CEO of Fireworks AI, sees transparency as the linchpin of trust and a prerequisite for widespread AI adoption. "We have seen many model providers publish papers and open source code to give transparency into the creation process," Qiao explained. "Even more important is opening the model weights to the public so the community has the maximum amount of control to examine and steer it. This is the future of model interpretability."

She notes that trust gaps are one of the biggest roadblocks to adoption in enterprise environments. "In high-stakes fields like healthcare or finance, nobody wants a black box. You need to be able to understand or debug a system before you can trust it."

But Vanja Josifovski, CEO of Kumo and former CTO at Pinterest and Airbnb, argues that our expectation of explainability may need to evolve. "We're used to intelligence being explainable with a few concise rules," he noted, "but what we've built today may not follow that path. Instead, it may be based on billions of micro-decisions encoded in massive matrices. We might never understand it in the way we're used to — and before we do, we might already be on to the next architecture. And yet, the world keeps turning."

One way to synthesize the debate is through a recent post on X by Hugging Face CEO Clément Delangue, in which he wrote: "Best way to push interpretability: open science and open-source AI for all to learn & inspect!"

As the AI field races forward, understanding — or even interpreting — what these systems are doing remains elusive. But that doesn't absolve companies, developers and policymakers, who are collectively responsible for ensuring that users can trust the outputs, trace the decisions and hold someone accountable when things go wrong. Whether this will require rethinking how we build models — or rethinking how we understand them — remains an open question. But it's one worth asking now, before AI becomes too embedded to pull back.