
Cisco Unveils 2025 State Of AI Security Report
Ahead of GISEC GLOBAL in Dubai from 6-8th May 2025, Cisco has unveiled the findings of its inaugural global State of AI Security report. The report aims to provide a comprehensive overview of important developments in AI security across several key areas: threat intelligence, policy, and research.
Artificial Intelligence AI has emerged as one of the defining technologies of the 21st century, yet the AI threat landscape is novel, complex, and not effectively addressed by traditional cybersecurity solutions. The State of AI Security report aims to empower the community to better understand the AI security landscape, so that companies are better equipped to manage the risks and reap the benefits that AI brings.
Cisco is participating at GISEC GLOBAL 2025 as a Platinum Sponsor, under the theme 'Innovating where security meets the network'. Across its portfolio, Cisco is harnessing AI to reframe how organizations think about cybersecurity outcomes and tip the scales in favor of defenders. Visitors at GISEC will learn how Cisco combines AI within its breadth of telemetry across the network, private and public cloud infrastructure, applications and endpoints to deliver more accurate and reliable outcomes.
'As AI becomes deeply embedded into business and society, securing it must become a top priority,' said Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS. 'As our State of AI Security report indicates, traditional cybersecurity approaches are no longer sufficient to address the unique risks presented by AI. GISEC serves as the ideal platform to discuss the new age of AI-enhanced cybersecurity – bringing together security leaders, innovators, and policymakers who are shaping the region's cyber defense strategies. Through our thought leadership and innovations, we are showcasing at GISEC, Cisco aims to equip organizations with the insights, research, and recommendations they need to build secure and resilient AI systems.'
Findings from Cisco's first State of AI Security report include:
Evolution of the AI Threat Landscape
The rapid proliferation of AI and AI-enabled technologies has introduced a massive new attack surface that security leaders are only beginning to contend with.
Risk exists at virtually every step across the entire AI development lifecycle; AI assets can be directly compromised by an adversary or discreetly compromised though a vulnerability in the AI supply chain. The State of AI Security report examines several AI-specific attack vectors including prompt injection attacks, data poisoning, and data extraction attacks. It also reflects on the use of AI by adversaries to improve cyber operations like social engineering, supported by research from Cisco Talos.
Looking at the year ahead, cutting-edge advancements in AI will undoubtedly introduce new risks for security leaders to be aware of. For example, the rise of agentic AI which can act autonomously without constant human supervision seems ripe for exploitation. On the other hand, the scale of social engineering threatens to grow tremendously, exacerbated by powerful multimodal AI tools in the wrong hands.
Key Developments in AI Policy
The past year has seen significant advancements in AI policy. International efforts have led to key developments in global AI governance. Early actions in 2025 suggest greater focus towards effectively balancing the need for AI security with accelerating the speed of innovation.
Original AI Security Research
The Cisco AI security research team has led and contributed to several pieces of groundbreaking research which are highlighted in the State of AI Security report.
Research into algorithmic jailbreaking of large language models (LLMs) demonstrates how adversaries can bypass model protections with zero human supervision. This technique can be used to exfiltrate sensitive data and disrupt AI services. More recently, the team explored automated jailbreaking of advanced reasoning models like DeepSeek R1, to demonstrate that even reasoning models can still fall victim to traditional jailbreaking techniques.
The team also explores the safety and security risks of fine-tuning models. While fine-tuning is a popular method for improving the contextual relevance of AI, many are unaware of the inadvertent consequences like model misalignment.
The report also reviews two pieces of original research into poisoning public datasets and extracting training data from LLMs. These studies shed light on how easily—and cost-effectively—a bad actor can tamper with or exfiltrate data from enterprise AI applications.
Recommendations for AI Security
Securing AI systems requires a proactive and comprehensive approach. The report outlines several actionable recommendations: Manage risk at every point in the AI lifecycle: Ensure your security team is equipped to identify and mitigate at every phase: supply chain sourcing (e.g., third-party AI models, data sources, and software libraries), data acquisition, model development, training, and deployment.
Ensure your security team is equipped to identify and mitigate at every phase: supply chain sourcing (e.g., third-party AI models, data sources, and software libraries), data acquisition, model development, training, and deployment. Maintain familiar cybersecurity best practices: Concepts like access control, permission management, and data loss prevention remain critical. Approach securing AI the same way you would secure core technological infrastructure and adapt existing security policies to address AI-specific threats.
Concepts like access control, permission management, and data loss prevention remain critical. Approach securing AI the same way you would secure core technological infrastructure and adapt existing security policies to address AI-specific threats. Uphold AI security standards throughout the AI lifecycle: Consider how your business is using AI and implement risk- based AI frameworks to identify, assess, and manage risks associated with these applications. Prioritize security in areas where adversaries seek to exploit weaknesses.
Consider how your business is using AI and implement risk- based AI frameworks to identify, assess, and manage risks associated with these applications. Prioritize security in areas where adversaries seek to exploit weaknesses. Educate your workforce in responsible and safe AI usage: Clearly communicate internal policies around acceptable AI use within legal, ethical, and security boundaries to mitigate risks like sensitive data exposure. 0 0

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Tahawul Tech
9 hours ago
- Tahawul Tech
Nintendo Switch 2 Archives
"We're entering this transition from a position of strength and bringing real-world experience to meet the demands of the AI factory". Learn more about @Vertiv's alignment with @nvidia below. #Vertiv #NVIDIA #tahawultech


Arabian Post
11 hours ago
- Arabian Post
X Integrates Polymarket to Let Users Bet on Real-World Events
Elon Musk's social media platform X has entered into a partnership with Polymarket, a cryptocurrency-based prediction market, enabling users to place bets on future events directly through the platform. This collaboration aims to integrate Polymarket's forecasting capabilities with X's user interface, allowing for real-time betting on topics ranging from political elections to economic indicators. Polymarket, established in 2020, operates as a decentralized platform where users can wager on the outcomes of various events using cryptocurrency. The platform utilizes blockchain technology to ensure transparency and security in transactions. By partnering with X, Polymarket seeks to broaden its user base and bring prediction markets into mainstream social media usage. Shayne Coplan, CEO of Polymarket, emphasized the significance of this partnership, stating that it represents a convergence of two platforms committed to truth-seeking and transparency. He highlighted that the integration would provide users with a more interactive and informed experience when engaging with current events on X. ADVERTISEMENT The collaboration is set to introduce features that allow users to participate in prediction markets seamlessly within the X platform. This includes the ability to place bets on live events, access real-time data, and receive AI-generated insights to inform their decisions. The integration is designed to enhance user engagement by combining social media interaction with financial incentives tied to real-world outcomes. Elon Musk has previously expressed interest in the predictive power of markets like Polymarket, suggesting that they can offer more accurate insights than traditional polling methods. By incorporating such a platform into X, Musk aims to provide users with tools that reflect collective intelligence and market-based forecasting. The partnership also aligns with Musk's broader vision of transforming X into a multifaceted platform that extends beyond traditional social media functionalities. By integrating financial services, content creation tools, and now prediction markets, X is positioning itself as a comprehensive digital ecosystem. While the integration promises to offer users new ways to engage with content and events, it also raises questions about regulatory compliance and the potential for market manipulation. Polymarket has faced scrutiny in the past, including a $1.4 million fine from the Commodity Futures Trading Commission in 2022 for operating an unregistered derivatives trading platform. The company has since taken steps to restrict access for U.S. users and ensure compliance with relevant regulations. As the partnership unfolds, both X and Polymarket will need to navigate the complex landscape of financial regulations, user privacy concerns, and the ethical implications of integrating betting mechanisms into social media. The success of this collaboration will depend on their ability to balance innovation with responsibility, ensuring that users can engage with prediction markets in a secure and informed manner. The integration is expected to roll out in phases, with initial features becoming available to select users before a broader launch. Both companies have indicated that they will provide updates on the progress of the integration and any new features that become available.


Martechvibe
18 hours ago
- Martechvibe
AdLift Announces the Launch of Tesseract
Tesseract delivers actionable insights for AI-savvy marketing strategies, whether it's identifying brand mentions in ChatGPT outputs or assessing visibility in Google's AI Overviews. Topics News Share Share AdLift Announces the Launch of Tesseract Whatsapp Linkedin AdLift has announced the launch of Tesseract. Tesseract is a tool designed to help brands, agencies, and marketers track and amplify their presence across the rapidly expanding landscape of Large Language Model (LLM) powered search platforms, such as ChatGPT, Gemini, Google AI Overviews, and Perplexity. AdLift Inc., now part of Liqvd Asia, has been at the forefront of innovation, bringing together talent to deliver the best solutions. With Tesseract, they're taking AI-powered marketing to the next level. As AI reshapes the way consumers find and interact with content, traditional SEO methods are fast becoming obsolete. This technology is built to unlock this new frontier, giving brands real-time visibility into how they are being discovered and represented within AI-powered responses. It helps marketers to not only monitor but also optimise their digital footprint where it counts—in the very engines powering the next generation of search. 'Search is undergoing a seismic shift. The dominance of traditional search engines is being challenged by AI-native platforms that interpret and present information differently,' said Prashant Puri, CEO & Co-Founder of AdLift Inc. 'Brands that don't adapt risk becoming invisible in this new landscape. Tesseract is our answer to this challenge—a revolutionary tool that puts brands back in control of their digital destiny.' ALSO READ: Unlike legacy SEO platforms, Tesseract decodes how LLMs display, prioritise, and contextualise brand content. Whether it's identifying brand mentions in ChatGPT outputs or assessing visibility in Google's AI Overviews, the platform delivers actionable insights for AI-savvy marketing strategies. 'AI agents are the future, and businesses are seeing the transformation since their introduction. There's a massive opportunity across industries, and with the Tesseract tool, we are proud to enjoy the first mover advantage of this service,' said Arron Goodin, Managing Director, AdLift Inc. 'As an agency, we are committed towards innovations, helping our clients and building a competitive edge with enhanced efficiency and deeper industry insights.' Arnab Mitra, Founder & Managing Director of Liqvd Asia, commented, 'At Liqvd Asia, innovation is our core. With Tesseract, we're not just responding to the AI revolution—we're shaping it.' 'This product reflects our commitment to empowering brands with cutting-edge solutions that anticipate the future of digital marketing. We believe Tesseract will be a game-changer, enabling brands to thrive in an AI-first world where visibility means everything.' By launching Tesseract, AdLift reaffirms its commitment to pushing the boundaries of digital innovation. ALSO READ: