Raising The Success Rate Of AI Deployment Across Industries

Forbes

Chris Brown, President at VASS Intelygenz, drives AI and deep tech innovation and implementation across industries, delivering tangible ROI.

The AI gold rush has produced countless proofs of concept yet far fewer production victories. McKinsey's 2025 report on AI reveals that almost all companies are investing in it, but just 1% believe they are at maturity. That gap between ambition and reality explains why boardrooms are now asking a harder question: Where is the return? To close the gap, executives must focus on business value, disciplined engineering practices and organizational readiness. Here's how this can be achieved.

A 2024 Harvard Business Review survey of 750 executives revealed that while 65% believe they have an advanced understanding of AI's benefits, only 6% reported a cutting-edge ability to derive value and profit-and-loss impact from the technologies. This disparity underscores the importance of grounding AI initiatives in clear business objectives. Translating that insight into action starts with writing a plain-language value hypothesis before any code is written: state the business problem, the workflow to improve, the key performance indicator and the budgeted payback window. When teams can recite that hypothesis, they build models that matter, not models that merely impress.

Ambitious visions are inspiring, yet the first deployment should solve a narrow pain point with clean data and clear success criteria. Automating support ticket triage beats launching a customer-facing chatbot because the input format is stable and the cost saving is measurable. Early wins earn organizational trust and generate the training data, funding and political capital required for bolder moves later.

AI solutions live at the intersection of data science, software engineering and domain expertise. McKinsey reports that AI pilots fail to scale for many reasons, but the most common culprits are poorly designed or executed strategies. Create an AI team that pairs data scientists with platform engineers, product owners and frontline operators from day one. When these roles share responsibility, they are forced to hash out the inevitable compromises, such as how fast the model must respond (latency), how to keep data and predictions secure (security) and how transparent the model's decisions need to be for regulators and users (explainability), well before the system goes live. Settling those trade-offs early prevents nasty surprises later, such as discovering that the model is too slow for a real-time workflow or fails a compliance review after launch.

Moving from the lab to production is not a simple handoff; it is a lifecycle. Adopting machine learning operations (MLOps) practices such as automated data validation, model versioning and continuous performance monitoring is a first step. Logging every prediction along with its real-world outcome allows teams to detect model drift (a decline in accuracy over time) and retrain before performance suffers. Tools such as infrastructure as code, which standardizes and automates environment setup, and containers, which package software to run reliably across systems, make it easier to roll back changes safely if an AI update introduces issues. In AI, the goal isn't perfection; it's building systems that can be consistently repeated and improved over time.
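
As a concrete illustration of that logging-and-monitoring loop, here is a minimal sketch in Python, assuming a simple rolling window of hit-or-miss outcomes; the DriftMonitor class, its thresholds and the retraining trigger are hypothetical, not part of any particular MLOps toolkit.

```python
from collections import deque

class DriftMonitor:
    """Log predictions with their observed outcomes and flag model drift."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05, window: int = 500):
        self.baseline_accuracy = baseline_accuracy  # accuracy measured at deployment
        self.tolerance = tolerance                  # allowed drop before flagging drift
        self.outcomes = deque(maxlen=window)        # rolling record of hits and misses

    def log(self, prediction, actual) -> None:
        # Pair every production prediction with the real-world outcome observed later.
        self.outcomes.append(prediction == actual)

    def drifted(self) -> bool:
        # Withhold judgment until the rolling window is full.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        recent_accuracy = sum(self.outcomes) / len(self.outcomes)
        return recent_accuracy < self.baseline_accuracy - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.92)
# On each request: monitor.log(model_output, observed_label);
# schedule retraining when monitor.drifted() returns True.
```
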
Before deploying AI models into full production, organizations should introduce a shadow testing phase. In this stage, models operate in "technical production," processing live data and generating predictions, but their outputs remain isolated from actual business decisions. This controlled environment enables teams to observe how models perform under real-world conditions without exposing customers or operations to risk. Shadow testing helps build confidence in model reliability and highlights gaps that may not surface during lab testing. It allows teams to refine outputs, uncover edge cases and validate performance metrics in parallel with current workflows. As trust in the model grows, organizations can move from passive observation to selective activation, making shadow testing a strategic bridge between development and deployment.
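
One common way to implement that isolation is a wrapper like the sketch below, assuming incumbent and shadow models that expose a predict method and a hypothetical request object with an id field: the shadow model scores the same live traffic, its output is only logged for offline comparison, and any shadow failure is contained so it can never disrupt the live workflow.

```python
import logging

logger = logging.getLogger("shadow_test")

def handle_request(request, incumbent_model, shadow_model):
    """Serve the incumbent's answer; run the shadow model on the same live data."""
    decision = incumbent_model.predict(request)

    try:
        # The shadow model sees real production traffic...
        shadow_decision = shadow_model.predict(request)
        # ...but its output is only logged for offline comparison,
        # never returned to the caller or acted on.
        logger.info(
            "request=%s incumbent=%s shadow=%s agree=%s",
            request.id, decision, shadow_decision, decision == shadow_decision,
        )
    except Exception:
        # A shadow-model failure must never break the live workflow.
        logger.exception("shadow model failed on request %s", request.id)

    return decision
```
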
No model stands alone. Map the user journey to decide where AI will automate, where it will augment and where it will advise. Provide confidence scores, user guidance and clear escalation paths so employees know when to trust the machine and when to take control. Effective change management should include role-based training and updated incentive structures that reward human judgment enhanced by AI rather than replaced by it.
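
To make those escalation paths concrete, here is an illustrative sketch of confidence-based routing; the thresholds and the triage labels are invented for the example, and in practice they would be tuned per workflow and risk appetite.

```python
AUTOMATE_THRESHOLD = 0.90  # illustrative values, not recommendations
SUGGEST_THRESHOLD = 0.60

def route_prediction(label: str, confidence: float) -> dict:
    """Decide whether the model automates, augments or defers to a human."""
    if confidence >= AUTOMATE_THRESHOLD:
        # High confidence: act automatically, keeping an audit trail.
        return {"action": "automate", "label": label, "confidence": confidence}
    if confidence >= SUGGEST_THRESHOLD:
        # Medium confidence: show a suggestion the employee can override.
        return {"action": "suggest", "label": label, "confidence": confidence}
    # Low confidence: escalate to a human with full context, no automated action.
    return {"action": "escalate", "label": label, "confidence": confidence}

print(route_prediction("billing_issue", 0.97))  # automate
print(route_prediction("billing_issue", 0.71))  # suggest
print(route_prediction("billing_issue", 0.42))  # escalate
```
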
Define a small set of leading and lagging indicators, and then track them weekly. Leading indicators might include inference latency (response time) or the percentage of tickets auto-routed. Lagging indicators capture business impact such as customer satisfaction or operating margin. Publish results in a shared dashboard to sustain executive sponsorship and to signal that the AI program is a growth driver.
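
A weekly review can be as simple as the sketch below, which separates leading from lagging indicators and flags any metric missing its target; the indicator names, values and targets are invented for illustration.

```python
# Hypothetical weekly snapshot: leading indicators warn early,
# lagging indicators confirm business impact after the fact.
indicators = {
    "leading": {
        "p95_inference_latency_ms": {"value": 180.0, "target": 250.0, "better": "lower"},
        "tickets_auto_routed_pct": {"value": 62.0, "target": 55.0, "better": "higher"},
    },
    "lagging": {
        "customer_satisfaction": {"value": 4.3, "target": 4.2, "better": "higher"},
        "operating_margin_pct": {"value": 18.0, "target": 19.0, "better": "higher"},
    },
}

for kind, metrics in indicators.items():
    for name, m in metrics.items():
        if m["better"] == "lower":
            on_track = m["value"] <= m["target"]
        else:
            on_track = m["value"] >= m["target"]
        status = "on track" if on_track else "MISS"
        print(f"{kind:8} {name:26} {m['value']:>6}  target {m['target']:>6}  {status}")
```
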
AI will not slow down, but neither will scrutiny. Companies that raise their deployment success rate treat AI as a business discipline, not a science experiment. Follow that playbook, and the next McKinsey survey could show your organization in the 30% club that is already converting models into meaningful growth.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?