
You Or Your Providers Are Using AI—Now What?
The rise of generative and agentic AI has fundamentally changed how enterprises approach risk management, software procurement, operations and security. But many companies still treat AI tools like any other software-as-a-service (SaaS) product, rushing to deploy them without fully understanding what they do—or how they expose the business.
Whether it's licensing a chatbot, deploying an AI-powered analytics platform or integrating large language model (LLM) capabilities into your workflows, when your organization becomes the recipient of AI, you inherit a set of security, privacy and operational risks that are often opaque and poorly documented. These risks are being actively exploited, particularly by state-sponsored actors targeting sensitive enterprise data through exposed or misused AI interfaces.
Not All AI Is The Same: Know What You're Buying
Procurement teams often treat all AI as a monolith. But there's a world of difference between generative AI (GenAI), which produces original content based on inputs, and agentic AI, which takes autonomous actions based on goals. For example, GenAI might assist a marketing team by drafting a newsletter based on a prompt, while agentic AI could autonomously decide which stakeholder to contact or determine the appropriate remediation action in a security operations center (SOC).
Each type of AI brings its own unique risks. Generative models can leak sensitive data if inputs or outputs are not properly controlled. Agentic systems can be manipulated or misconfigured to take damaging actions, sometimes without oversight.
Before integrating any AI tool, companies need to ask fundamental questions: What data will be accessed, and where could it be exposed? Is this system generating content, or is it taking action on its own? That distinction should guide every aspect of your risk assessment.
Security Starts With Understanding
Security professionals are trained to ask, 'What is this system doing? What data does it touch? Who can interact with it?' Yet, when it comes to AI, we often accept a black box.
Every AI-enabled application your company uses should be inventoried; a minimal record sketch follows this list. You need to know:
• What kind of AI is being used (e.g., generative or agentic)?
• What data was used to develop the underlying model, and what controls are in place to ensure accuracy?
• Where is the model hosted (e.g., on-premises, vendor-controlled or in the cloud)?
• What data is being ingested?
• What guardrails are in place to prevent abuse, leakage or hallucination?
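To make that inventory concrete, here is a minimal sketch of what a single inventory record might capture. The field names and categories below are illustrative assumptions, not a standard schema, and a real inventory would more likely live in a GRC or asset-management platform than in code.

```python
# A minimal sketch of an AI application inventory record.
# Field names and enum values are illustrative assumptions, not an industry standard.
from dataclasses import dataclass, field
from enum import Enum


class AIType(Enum):
    GENERATIVE = "generative"
    AGENTIC = "agentic"


class Hosting(Enum):
    ON_PREMISES = "on-premises"
    VENDOR_CONTROLLED = "vendor-controlled"
    CLOUD = "cloud"


@dataclass
class AIAppRecord:
    name: str                # the AI-enabled application
    ai_type: AIType          # generative or agentic
    hosting: Hosting         # where the model runs
    training_data: str       # what data was used to develop the underlying model
    data_ingested: list[str] = field(default_factory=list)  # data the tool touches
    guardrails: list[str] = field(default_factory=list)     # abuse/leakage/hallucination controls


# Hypothetical example entry.
code_assistant = AIAppRecord(
    name="Code assistant (vendor-hosted)",
    ai_type=AIType.GENERATIVE,
    hosting=Hosting.VENDOR_CONTROLLED,
    training_data="Vendor-curated public code, per vendor documentation",
    data_ingested=["source code", "developer prompts"],
    guardrails=["DLP scanning on prompts", "output logging", "access via SSO"],
)
```

Even a simple record like this forces the questions above to be answered application by application rather than assumed.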
NIST's AI Risk Management Framework and SANS' recent guidance offer excellent starting points for implementing the right security controls. But at a baseline, companies must treat AI like any other sensitive system, with controls for access, monitoring, auditing and incident response.
Why AI Is A Data Loss Prevention (DLP) Risk
One of the most underappreciated security risks of AI is data leakage. Tools like ChatGPT, GitHub Copilot and countless analytics platforms are hungry for data. Employees often don't realize that entering sensitive information into these tools can result in that information being retained, reprocessed or even exposed to others.
Data loss prevention (DLP) is making a comeback, and for good reason. Companies need modern DLP tools that can flag when proprietary code, personally identifiable information (PII) or customer records are being piped into third-party AI models. This isn't just a compliance issue—it's a core security function, particularly when dealing with foreign-developed AI platforms.
China's DeepSeek AI chatbot has raised multiple concerns. South Korean regulators fined DeepSeek's parent company for transferring personal data from South Korean users to China without consent. Microsoft also recently barred its employees from using the platform due to data security risks.
These incidents highlight the broader strategic risks of embedding third-party AI tools into enterprise environments—especially those built outside of established regulatory frameworks.
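As an illustration of the DLP function described above, the sketch below checks outbound prompts for a few sensitive patterns before they are forwarded to an external model. The patterns, function names and blocking behavior are assumptions for illustration only; production DLP relies on dedicated tooling with far more robust detection (classifiers, exact-data matching and so on).

```python
# A minimal sketch of a DLP-style check run before a prompt leaves the enterprise
# boundary for a third-party AI model. Patterns are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def send_to_ai(prompt: str) -> str:
    findings = scan_prompt(prompt)
    if findings:
        # Block or route for review instead of forwarding to the external model.
        raise ValueError(f"Prompt blocked by DLP check: {', '.join(findings)}")
    return call_external_model(prompt)  # hypothetical call to the vendor's API


def call_external_model(prompt: str) -> str:
    return "(model response)"  # placeholder for the real vendor integration
```

The design point is that the check sits between the employee and the third-party model, so policy is enforced regardless of which AI tool is on the other end.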
A Checklist For Responsible AI Adoption
CIOs, CTOs and CISOs need a clear framework for evaluating AI vendors and managing AI internally. Here's a five-part checklist to guide these engagements:
1. Data Rights And Contracts
• Is there a data processing agreement in place?
• Who owns the outputs and derivatives of your data?
• What rights does the vendor retain to train their models?
2. Integration And Accountability
• How will this AI tool be integrated into existing workflows?
• Who owns responsibility for the AI's decisions or outputs?
• Are there human-in-the-loop controls?
3. Ethics And Explainability
• Could the model generate biased, harmful or misleading results?
• Are decisions explainable?
• Have stakeholders from HR and legal teams been consulted?
4. Data Governance
• Is personal or regulated data entering the model?
• Is the model trained on proprietary or publicly scraped data?
• Are there retention and deletion policies?
5. Security And Monitoring
• Has the model or its supply chain been tested for adversarial attacks?
• Are prompts and outputs being logged and monitored? (See the logging sketch after this checklist.)
• Can malicious users exploit the model to extract data or alter behavior?
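On the logging and monitoring point, the sketch below shows one way to wrap model calls so that every prompt and output is captured in an auditable record. The field names and the file-based sink are illustrative assumptions; in practice these events would feed a SIEM or similar monitoring pipeline.

```python
# A minimal sketch of prompt/output audit logging around an AI model call.
# The log fields and file-based sink are illustrative assumptions; an enterprise
# deployment would ship these records to centralized, monitored storage.
import json
import time
import uuid


def audited_model_call(model_fn, prompt: str, user_id: str, app_name: str) -> str:
    """Call an AI model and record the prompt, output and metadata for review."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "app": app_name,
        "user": user_id,
        "prompt": prompt,
    }
    response = model_fn(prompt)
    record["response"] = response

    # Append-only audit trail; replace with your log platform in practice.
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return response
```

A wrapper like this also gives security teams a single choke point for alerting, for example flagging prompts that match the DLP patterns shown earlier.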
Final Thought: Awareness And Accountability
AI security doesn't start in the SOC. Instead, it should start with awareness across the business. Employees need to understand that an LLM isn't a search engine, and a prompt isn't a safe space. Meanwhile, security teams must expand visibility with tools that monitor AI use, flag suspicious behavior and inventory every AI-enabled app.
You may not have built or hosted the model, but you'll still be accountable when things go wrong, whether it's a data leak or a harmful decision. Don't assume vendors have done the hard work of securing their models. Ask questions. Run tests. Demand oversight.
AI will only grow more powerful and more autonomous. If you don't understand what it's doing today, you certainly won't tomorrow.