Latest news with #incidentResponse


Forbes
09-07-2025
- Business
- Forbes
Shadow AI Vs. Safe AI: What Security Leaders Must Know
Tannu Jiwnani is a cybersecurity leader focused on incident response, IAM and threat detection, with a passion for resilience and community.

Artificial intelligence (AI) is changing business operations by streamlining tasks, automating decisions and generating insights quickly. However, this increase in AI usage has led to a distinction between officially sanctioned, well-governed AI tools (safe AI) and unsanctioned, unmonitored use (shadow AI). Security leaders must address this growing divide. With many employees having access to powerful AI tools through web browsers, understanding the risks and responsibilities associated with AI adoption is important. Here, I'll discuss what shadow AI is, how it differs from safe AI and the role of security teams in maintaining enterprise security, compliance and trust during AI adoption.

Defining The Landscape: Shadow AI Vs. Safe AI

Shadow AI involves using AI tools like public LLMs, generative image tools or custom machine learning models without IT, security or compliance approval. Examples include marketing using ChatGPT for content, developers using GitHub Copilot or analysts uploading customer data to external AI tools. These actions, though well-meaning, can pose significant risks.

Safe AI refers to vetted AI tools managed by security and compliance teams with proper access control, logging and policies. This includes enterprise LLMs with privacy controls (e.g., Azure OpenAI or ChatGPT Enterprise), internally developed models with MLOps and vendor tools under data processing agreements (DPAs).

Why Shadow AI Is Growing

The adoption of AI is happening faster than most organizations can control. The reasons behind the surge in shadow AI include:

• Speed And Convenience: Employees can access AI tools online instantly, often with no installation required.
• Productivity Pressure: Teams are rewarded for speed, not compliance. AI can help them hit KPIs faster.
• Lack Of Awareness: Many employees do not realize that using external AI tools can violate security or compliance policies.
• Lagging Governance: Organizations often struggle to update policies fast enough to keep up with AI innovation.

The Risks Of Shadow AI

The use of unauthorized or unmanaged AI tools poses significant risks to organizations. One major concern is data leakage: When employees upload sensitive, proprietary or regulated information to public AI models, they may inadvertently expose critical data. Even when providers claim not to store inputs, there is rarely an enterprise-grade guarantee of data privacy.

There is also a substantial intellectual property risk. Many AI tools can retain or learn from the data they process, meaning that sharing source code, business strategies or confidential workflows can potentially compromise trade secrets.

Compliance is another area of vulnerability. AI tools may process data in ways that violate regulations such as GDPR, HIPAA or industry-specific standards. A lack of logging and audit trails makes demonstrating compliance nearly impossible.

The quality and integrity of output from shadow AI tools are also unreliable. These tools may rely on outdated or biased training data, leading to inconsistent, unvetted results that can skew decision-making, introduce bias and damage brand reputation.

Finally, shadow AI creates gaps in incident response. Because its use typically falls outside sanctioned IT and security systems, activity is often not logged or monitored. This makes it difficult for security teams to detect, investigate or respond to data misuse or breaches in a timely manner.

What Safe AI Looks Like

Safe AI tools are secure by design, incorporating core protections such as encryption in transit and at rest, strict access controls, audit logging and clear data retention policies to prevent unauthorized storage.

They also prioritize privacy, offering features like data residency guarantees, the ability to opt out of model training, anonymization or redaction of sensitive inputs and fine-tuning on sanitized datasets.

Effective AI use is governed by clear organizational policies that outline approved tools and platforms, define acceptable use cases, establish data handling requirements and specify escalation procedures for violations.

Safe AI is also integrated into the broader risk management framework. This means it is factored into threat models, tabletop exercises, third-party risk assessments and routine audits to ensure ongoing oversight.

Building A Shadow AI Response Strategy

Given the inevitability of shadow AI, security teams must take a proactive stance:

1. Discovering Shadow AI Use: Leverage browser telemetry, data loss prevention (DLP) and cloud access security broker (CASB) tools to detect use of public AI tools. Conduct employee surveys and interviews to understand where and why AI is being used.
2. Educating And Enabling: Train employees on the risks of shadow AI. Provide safe, approved alternatives. Encourage a culture of responsible experimentation.
3. Building Governance Into Access: Embed AI usage policies into onboarding. Use just-in-time access or AI usage approvals for sensitive roles.
4. Involving Legal And Compliance: Create workflows to assess new AI tools quickly. Keep a system of record of all approved tools.
5. Updating Incident Response Playbooks: Add AI-specific incident types (e.g., prompt leakage, model misuse). Train incident response teams to detect, triage and respond to AI-related incidents.

The Future Of AI Governance

As AI's capabilities evolve, our frameworks for using it safely must adapt as well. Security leaders who are prepared for the future will foresee AI integration throughout all departments, establish cross-functional AI risk committees, insist on vendor transparency regarding model behavior and clarify responsibilities between users, builders and security teams. The ultimate goal isn't to block AI; it's to enable its safe and consistent use, ensuring that it aligns with the organization's values and responsibilities.

Turning The Tide

The emergence of shadow AI is a clear warning. It demonstrates that AI is useful, sought-after and already deeply embedded in daily work practices. It also signals that security policies and practices need urgent updates. By investing in safe AI strategies, security leaders can transform AI from a hidden risk into a trusted resource. The future of enterprise AI is not about control or restriction; it is about trust, transparency and transformation. Security teams must take the initiative and lead this change.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
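The "Discovering Shadow AI Use" step above leans on telemetry such as proxy or DNS logs. As a minimal sketch of that idea (the domain watchlist and the log format here are illustrative assumptions, not any particular DLP or CASB product's schema or a complete inventory of AI services), flagging traffic to known public AI tools could look like this:

```python
# Hypothetical watchlist of public AI tool domains (illustrative only).
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit the watchlist.

    Assumes each log line is 'timestamp user domain', space-separated —
    a stand-in for whatever schema your proxy actually emits.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _, user, domain = parts[:3]
        if domain.lower() in AI_DOMAINS:
            hits.append((user, domain))
    return hits

sample = [
    "2025-07-01T09:14 alice chatgpt.com",
    "2025-07-01T09:15 bob intranet.example.com",
]
print(flag_shadow_ai(sample))  # -> [('alice', 'chatgpt.com')]
```

In practice this filtering happens inside DLP/CASB tooling rather than a hand-rolled script, and the output would feed the employee-survey and education steps rather than serve as grounds for blocking on its own.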


Daily Mail
15-06-2025
- Automotive
- Daily Mail
Drivers warned major road rule change in Australia with $961 fines 'just weeks' away: What you need to know
A major rule change is just weeks away and Aussie drivers have been warned they could be hit with fines as high as $961 if they break it. From July 1, drivers in Victoria must not exceed 40km/h when they drive past an incident response vehicle. The rule was already in place for emergency services like police and ambulance on the side of the road and now it will include tow trucks, mechanics and roadside assistance vehicles. Royal Automobile Club of Victoria general manager Makarla Cole told Yahoo News the rule would give more emergency workers protection on the side of the road. The standard penalty for exceeding the 40km/h speed limit near incident response vehicles is $346 but it can be as high as $961, with no demerit points docked. The new rules have been put in place due to safety concerns from roadside workers. A survey by RACV revealed 83 per cent of roadside workers experienced a close call with another vehicle at least once a week. Patroller Johnny Dipietro said he had experienced a number of near misses on the side of the road. 'I had a vehicle that almost hit me and I'll tell you what, it was really scary,' he said. Incident responder Steven Bevens said close calls happened 'every day' when on the shoulder of a busy road or highway. The Victorian Automotive Chamber of Commerce's Peter Jones said the new rules were necessary. 'We're pleased to see the Victorian government's commitment to roadside worker safety becoming a reality,' he said. 'When you see those flashing lights – whether it's police, ambulance, or now our towing and roadside assistance vehicles – slow down to 40km/h. It's a simple action that could save lives. 'This rule change finally gives them the protection they deserve. We urge all motorists to see this as an investment in everyone's safety.'


Khaleej Times
24-05-2025
- Automotive
- Khaleej Times
Dubai: Parked car catches fire at DXB; no injuries reported
Authorities immediately responded to the incident and put out the fire. A parked SUV at Dubai International Airport Terminal 1 caught fire at midday on Saturday. The fire was quickly extinguished and no injuries were reported.