Shadow AI Vs. Safe AI: What Security Leaders Must Know

Forbes | 09-07-2025
Tannu Jiwnani is a cybersecurity leader focused on incident response, IAM and threat detection, with a passion for resilience and community.
Artificial intelligence (AI) is changing business operations by streamlining tasks, automating decisions and generating insights quickly. However, this increase in AI usage has created a split between officially sanctioned, well-governed AI tools (safe AI) and unsanctioned, unmonitored use (shadow AI). Security leaders must address this growing divide.
With many employees having access to powerful AI tools through a web browser, understanding the risks and responsibilities associated with AI adoption is essential. Here, I'll discuss what shadow AI is, how it differs from safe AI and the role of security teams in maintaining enterprise security, compliance and trust during AI adoption.
Defining The Landscape: Shadow AI Vs. Safe AI
Shadow AI involves using AI tools like public LLMs, generative image tools or custom machine learning models without IT, security or compliance approval. Examples include marketing using ChatGPT for content, developers using GitHub Copilot or analysts uploading customer data to external AI tools. These actions, though well-meaning, can pose significant risks.
Safe AI refers to vetted AI tools managed by security and compliance teams with proper access control, logging and policies. This includes enterprise LLMs with privacy controls (e.g., Azure OpenAI or ChatGPT Enterprise), internally developed models with MLOps and vendor tools under data processing agreements (DPAs).
Why Shadow AI Is Growing
The adoption of AI is happening faster than most organizations can control. The reasons behind the surge in shadow AI include:
• Speed And Convenience: Employees can access AI tools online instantly, often with no installation required.
• Productivity Pressure: Teams are rewarded for speed, not compliance. AI can help them hit KPIs faster.
• Lack Of Awareness: Many employees do not realize that using external AI tools can violate security or compliance policies.
• Lagging Governance: Organizations often struggle to update policies fast enough to keep up with AI innovation.
The Risks Of Shadow AI
The use of unauthorized or unmanaged AI tools poses significant risks to organizations. One major concern is data leakage: When employees upload sensitive, proprietary or regulated information to public AI models, they may inadvertently expose critical data. Even when providers claim not to store inputs, there is rarely an enterprise-grade guarantee of data privacy.
There is also a substantial intellectual property risk. Many AI tools can retain or learn from the data they process, meaning that sharing source code, business strategies or confidential workflows can potentially compromise trade secrets.
Compliance is another area of vulnerability. AI tools may process data in ways that violate regulations such as GDPR, HIPAA or industry-specific standards, and the lack of logging and audit trails makes demonstrating compliance nearly impossible.
The quality and integrity of output from shadow AI tools are also unreliable. These tools may rely on outdated or biased training data, leading to inconsistent, unvetted results that can skew decision-making, introduce bias and damage brand reputation.
Finally, shadow AI creates gaps in incident response. Because its use typically falls outside sanctioned IT and security systems, activity is often not logged or monitored. This makes it difficult for security teams to detect, investigate or respond to data misuse or breaches in a timely manner.
What Safe AI Looks Like
Safe AI tools are secure by design, incorporating core protections such as encryption in transit and at rest, strict access controls, audit logging and clear data retention policies to prevent unauthorized storage. They also prioritize privacy, offering features like data residency guarantees, the ability to opt out of model training, anonymization or redaction of sensitive inputs and fine-tuning on sanitized datasets.
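To make one of those controls concrete, here is a minimal sketch of a pre-submission redaction filter that strips obvious identifiers from a prompt before it leaves the corporate boundary. The patterns, labels and example text are illustrative assumptions; a production control would rely on a vetted PII-detection library (such as Microsoft Presidio) and rules tuned to the organization's own data.

```python
import re

# Hypothetical patterns for illustration only; real deployments should use a
# maintained PII-detection library and organization-specific rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive values with placeholder tokens before the
    prompt is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
    print(redact(raw))
    # -> "Summarize the complaint from [EMAIL REDACTED], SSN [SSN REDACTED]."
```

A filter like this sits naturally in an API gateway or browser extension in front of an approved enterprise LLM, so redaction happens consistently rather than relying on each employee to remember the policy.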
Effective AI use is governed by clear organizational policies that outline approved tools and platforms, define acceptable use cases, establish data handling requirements and specify escalation procedures for violations. Safe AI is also integrated into the broader risk management framework. This means it is factored into threat models, tabletop exercises, third-party risk assessments and routine audits to ensure ongoing oversight.
Building A Shadow AI Response Strategy
Given the inevitability of shadow AI, security teams must take a proactive stance:
1. Discovering Shadow AI Use: Leverage browser telemetry, data loss prevention (DLP) and cloud access security broker (CASB) tools to detect use of public AI tools. Conduct employee surveys and interviews to understand where and why AI is being used. (A minimal log-review sketch follows this list.)
2. Educating And Enabling: Train employees on the risks of shadow AI. Provide safe, approved alternatives. Encourage a culture of responsible experimentation.
3. Building Governance Into Access: Embed AI usage policies into onboarding. Use just-in-time access or AI usage approvals for sensitive roles.
4. Involving Legal And Compliance: Create workflows to assess new AI tools quickly. Keep a system of record of all approved tools.
5. Updating Incident Response Playbooks: Add AI-specific incident types (e.g., prompt leakage, model misuse). Train incident response teams to detect, triage and respond to AI-related incidents.
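As a complement to commercial DLP and CASB discovery in step 1, teams can get a quick signal from proxy or secure web gateway exports they already have. The sketch below assumes a CSV export with hypothetical department and destination_host columns and a hand-maintained domain watchlist; it is a starting point for scoping the problem, not a substitute for purpose-built tooling.

```python
import csv
from collections import Counter

# Illustrative watchlist; a real program would source AI-tool categories from
# the CASB or secure web gateway vendor and keep the list current.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "copilot.microsoft.com", "perplexity.ai"}

def summarize_ai_traffic(proxy_log_csv: str) -> Counter:
    """Count requests to known public AI tools per department, assuming a
    proxy export with 'department' and 'destination_host' columns."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("department", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for dept, count in summarize_ai_traffic("proxy_export.csv").most_common():
        print(f"{dept}: {count} AI-tool requests")
```

Even a rough department-level count like this helps prioritize which teams need approved alternatives and training first.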
The Future Of AI Governance
As AI's capabilities evolve, our frameworks for using it safely must adapt as well. Security leaders who are prepared for the future will anticipate AI integration across every department, establish cross-functional AI risk committees, insist on vendor transparency regarding model behavior and clarify responsibilities among users, builders and security teams.
The ultimate goal isn't to block AI; it's to enable its safe and consistent use, ensuring that it aligns with the organization's values and responsibilities.
Turning The Tide
The emergence of shadow AI is a clear warning. It demonstrates that AI is useful, sought-after and already deeply embedded in daily work practices. It also signals that security policies and practices need urgent updates.
By investing in safe AI strategies, security leaders can transform AI from a hidden risk into a trusted resource. The future of enterprise AI is not about control or restriction; it is about trust, transparency and transformation. Security teams must take the initiative and lead this change.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.