Latest news with #PradeepKumarMuthukamatchi


Forbes
14 hours ago
- Business
AI's Momentum: How Startups Are Redefining Enterprise Adoption
Pradeep Kumar Muthukamatchi is a Principal Cloud Solution Architect at Microsoft and a passionate advisor to numerous startups.

For startups in particular, AI has transcended innovation budgets to become a fundamental operational imperative, driving tangible value and reshaping entire industries. I work daily with agile startups, and what I'm seeing on the ground confirms a significant shift: AI is being applied with aggressive intent and remarkable results. To give founders a more nuanced look at what's top of mind for enterprise buyers today, let's take a closer look at just how boldly the AI revolution is reshaping startup strategy. The surprises don't stop there.

AI Budgets Are Exploding: From Experiment To Essential

Remember when AI was primarily funded through 'innovation budgets,' discretionary pools for speculative projects? Those days are rapidly fading. We're now witnessing AI spend move directly into core operational IT line items. According to PwC's May 2025 AI agent survey, nearly 88% of responding executives said their companies plan to increase AI-related budgets during the year because of agentic AI. This surge reflects newfound confidence in AI's ROI. Startups, with their inherent need for efficiency and rapid scalability, are leading this charge, investing heavily in AI solutions to automate, optimize and differentiate.

Fine-Tuning Takes A Back Seat: The Power Of Prompt Engineering

One of the most surprising yet impactful shifts is the decreasing criticality of extensive fine-tuning. Newer, more intelligent models with longer context windows can deliver similar or even better results through advanced prompt engineering alone. This is a game changer for startups. Fine-tuning can be a resource-intensive and time-consuming process, requiring significant data preparation and computational power. By relying on sophisticated prompt engineering, startups can achieve strong model performance with far less effort and cost.
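As a minimal illustration of the prompt-first approach (the function and all field names here are hypothetical, not any particular startup's code), a reusable prompt template can encode the role, context and output constraints that fine-tuning once handled:

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt instead of fine-tuning a model.

    An explicit role, context block and constraint list steer a
    long-context model toward the desired behavior with no retraining.
    """
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = build_prompt(
    role="an expert video script writer",
    context="A 60-second product demo aimed at a B2B audience.",
    task="Draft the narration script, scene by scene.",
    constraints=["Keep a friendly, confident tone.", "Stay under 150 words."],
)
```

Because the output is just a string, switching providers means sending the same text to a different API, which is exactly the portability benefit of avoiding model-specific fine-tuning.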
This agility also reduces model lock-in, allowing for greater portability across different models and providers. Synthesia, a London-based startup that lets users produce AI-generated videos with digital avatars, eliminating the need for cameras, actors or editing software, uses prompt engineering to direct its AI in script generation, avatar selection and adjusting tone and delivery rather than retraining models for each scenario. According to TechCrunch, Synthesia's prompt-first strategy has attracted more than 60,000 customers, including major enterprises, and helped the company raise $330 million in funding, reaching a valuation of over $2 billion.

Multimodel Is The New Norm: Strategic Selection For Optimal Performance

The 'one-size-fits-all' approach to AI models is quickly becoming obsolete. Enterprises, particularly agile startups, are now embracing a multimodel strategy, often deploying five or more different models in production for various use cases. This is a strategic necessity driven by a clear goal: optimizing both cost and performance. Different AI models inherently excel at different tasks. Some models are designed for robust, in-depth, complex question-answering systems and intricate data analysis, offering broad understanding and deep contextual comprehension. Others are better suited for creative brainstorming, content generation or rapid ideation thanks to their conversational fluency and ability to produce diverse outputs.

Harvey, a legal tech startup, integrates multiple AI models across its platform. One model specializes in summarizing dense legal documents, another in drafting client communications and yet another in extracting structured data from scanned contracts. This multimodel strategy allows Harvey to serve roughly 400 law firms globally, including one-third of the top 100 U.S. firms.

Rising Stakes: AI's Deeper Integration Making Switching More Complex

The 'easy come, easy go' era for AI models is fading.
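A multimodel deployment of the kind described above can be sketched as a simple routing layer that maps each task type to the model best suited (and cheapest) for it. The model names below are placeholders for illustration, not a specific vendor's lineup:

```python
# Hypothetical per-task model routing: each use case is served by the
# model offering the best cost/performance trade-off for that task.
MODEL_ROUTES = {
    "summarize_document": "deep-context-model",         # long-context comprehension
    "draft_client_email": "fluent-chat-model",          # conversational fluency
    "extract_contract_fields": "structured-extraction-model",
    "brainstorm_ideas": "creative-model",
}

def pick_model(task_type: str, default: str = "general-purpose-model") -> str:
    """Return the production model assigned to a task type.

    Unrecognized tasks fall back to a general-purpose default rather
    than failing, so new use cases still get served.
    """
    return MODEL_ROUTES.get(task_type, default)
```

Centralizing the mapping in one table also makes the switching costs discussed next concrete: swapping a model means re-validating every prompt behind that route, not just editing one line.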
While simple AI tasks once allowed for seamless model interchange, the rise of agentic workflows is changing the game. These sophisticated, multistep AI processes, such as an AI autonomously drafting an email, researching a recipient and scheduling a follow-up, demand deep integration. Many companies are now investing heavily in custom guardrails and precision prompting for these intricate workflows.

This commitment creates significant switching costs. I worked with one leader who noted that extensive prompts are meticulously tuned for specific models, often spanning 'lots of pages of instruction.' Changing a model now means a massive reengineering effort, as even a small tweak can disrupt complex, interdependent workflows. The difficulty of ensuring reliable results with a new model makes businesses, especially startups, far more reluctant to switch, effectively anchoring them to their chosen AI partners.

The Shift From 'Build' To 'Buy': Accelerating Innovation

The AI application ecosystem has matured at an unprecedented pace, leading to a significant shift from 'build' to 'buy' strategies within enterprises, especially among startups. Off-the-shelf, AI-native applications are often outperforming internal builds, proving more efficient, reliable and easier to maintain. Many startups, with their focus on rapid iteration and market fit, are keenly aware that their core competency might not be in building foundational AI infrastructure from scratch. Instead, they can leverage purpose-built AI apps to innovate faster, leading to better outcomes, happier users and a stronger ROI.

Instead of building a fraud detection system from scratch, SentiLink, a fintech startup tackling synthetic identity fraud, developed a hybrid approach using prebuilt AI models enhanced with human insight. Its system flags suspicious applications by analyzing patterns across Social Security numbers, addresses and behavioral data.
This buy-plus-optimize strategy enabled SentiLink to scale rapidly, serving over 300 financial institutions, including major fintechs and seven of the top 15 U.S. banks.

The Bottom Line: Real Impact, Real Opportunity

Enterprise AI is moving at lightning speed. What we're observing in the startup ecosystem is not merely experimental investment but a profound integration of AI into core business functions. Real budgets, real traction, a real focus on value and real, powerful tools are driving this transformation. This creates an enormous opportunity for startups to create significant impact, reshape industries and deliver unprecedented value to their customers. The lessons learned from these agile pioneers (strategic model selection, the power of prompt engineering and the embrace of a robust AI app ecosystem) are invaluable for any enterprise looking to harness the true potential of AI.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Forbes
15-07-2025
- Business
Unlock Your AI's Superpowers: The Prompting Secrets Of Top Startups
Pradeep Kumar Muthukamatchi is a Principal Cloud Solution Architect at Microsoft and a passionate advisor to numerous startups.

I've had a front-row seat guiding countless startups as they harness the immense power of cloud and AI. Every day, I witness startups achieving remarkable feats with AI. But here's a secret: The most successful ones aren't just using AI; they're talking to it better. The magic often boils down to something called "prompt engineering," basically, how you ask the AI to do things. Based on recent insights and my observations from leading accelerators about top-performing AI startups, I can say that the quality of your AI's output is closely tied to the quality of your input. It's time to move beyond simple questions and adopt a more deliberate, architectural approach to prompting.

"Think of your initial prompt as a detailed job description for a new, brilliant, but inexperienced hire," I often tell founders. "The more clarity, context and constraint you provide, the better the performance you'll receive." This philosophy is a far cry from the one-line queries many users start with. So, how can you get your AI to work like a star performer? Let's dive in.

Be The Boss: Give Your AI A Role And Super-Clear Instructions

The most effective AI startups treat their LLMs like new, highly capable team members needing explicit direction. This means crafting prompts that are incredibly detailed and specific. A well-known startup that designed an AI customer support agent uses prompts spanning multiple pages; essentially, comprehensive operational playbooks. This isn't verbosity for its own sake; it's precision engineering.

Start by assigning a clear persona: "You are an expert [Role] specializing in [Domain]." For instance, "You are a seasoned financial analyst tasked with identifying early-stage investment risks," or "You are a compassionate customer success agent for a B2B SaaS product, focused on de-escalation and rapid problem resolution."
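The persona pattern can be captured in a small helper. This is a sketch of the "You are an expert [Role] specializing in [Domain]" convention, not any startup's actual code:

```python
def persona(role: str, domain: str, focus: str = "") -> str:
    """Build the opening persona line of a system prompt.

    Follows the 'You are an expert [Role] specializing in [Domain]'
    pattern; an optional focus narrows the behavior further.
    """
    line = f"You are an expert {role} specializing in {domain}."
    if focus:
        line += f" You are focused on {focus}."
    return line

system_prompt = persona(
    "customer success agent",
    "B2B SaaS support",
    focus="de-escalation and rapid problem resolution",
)
```

Generating the persona line programmatically keeps it consistent across the many multi-page prompts a team maintains, instead of hand-editing each one.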
This immediately sets the tone, style and knowledge domain the LLM should adopt.

Deconstructing Complexity: Structured Tasks And Step-By-Step Guidance

Just as a project manager outlines a complex initiative, your prompt should clearly state the LLM's primary objective and then break down intricate tasks into manageable steps. If you want the LLM to draft a marketing email, analyze customer feedback for sentiment and then suggest product improvements, consider making these distinct, ordered steps within your prompt.

Structure isn't just for the instructions; it's also for the desired output. Using Markdown, bullet points, XML tags or even custom delimiters helps the LLM understand how to format its response. This structured output is crucial for downstream processing and consistency. For example, a top startup reportedly uses dedicated tags to enforce specific response elements, ensuring that critical checks are performed.

Iterative Refinement And Learning: Examples And Meta-Prompting

One of the most powerful yet underutilized techniques is "show, don't just tell." For complex or nuanced tasks, embedding a few high-quality examples of input-output pairs directly into your prompt can dramatically improve the LLM's ability to understand and replicate the desired behavior.

Furthermore, don't forget that LLMs can help you improve your prompts. This is known as meta-prompting. You can provide your current prompt to an LLM and ask: "Critique this prompt. How can I make it clearer, more effective or less ambiguous for an AI assistant?" The suggestions can often be surprisingly insightful. Meta-prompting is a direct application of this principle to the workflow of prompt engineering itself.

Building Robust And Adaptive Systems

Even with the best prompts, ambiguity can arise if the LLM encounters unforeseen scenarios or lacks specific information. It's crucial to build in "escape hatches."
Instruct the LLM to state "I don't know" or "Insufficient information" when unsure and to specify what information is needed. This approach reduces the likelihood of "hallucinations" (confident but incorrect answers) and increases user trust.

For sophisticated multistep workflows, consider dynamic prompt generation. An initial prompt might classify a user's request or event, with the output generating a more specialized sub-prompt. For example, an initial prompt identifies a software bug report as a "frontend UI error," which then triggers a specific follow-up prompt: "You are a senior frontend developer. Analyze the following UI error description and provide a triage report, including potential causes, affected components and suggested debugging steps: [error details]." This adaptive approach ensures relevant context is applied.

The Unsung Heroes: Debugging Traces And Rigorous Evaluation

To understand and improve your LLM's performance, ask it to show its work. Include instructions like, "In a section titled 'Reasoning_Process,' explain the step-by-step logic used to arrive at your answer."

While prompt engineering is crucial, the evaluation framework (your "evals") is arguably your most critical piece of intellectual property. Develop robust evaluation metrics and processes to measure prompt effectiveness and improvement. This systematic evaluation approach distinguishes startups that merely use AI from those that truly innovate with it.

Additionally, different LLMs, and even different versions of the same model, can have distinct "personalities" and capabilities. Some startups use powerful models for initial high-quality prompt generation or complex reasoning tasks, then distill the output for use with a faster, more cost-effective model in production. This optimization strategy balances capability with operational efficiency.
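The classify-then-specialize flow described above can be sketched as a two-stage prompt builder. The category labels and wording are illustrative, not a production taxonomy, and the fallback doubles as the "escape hatch" discussed earlier:

```python
# Hypothetical mapping from a first-stage classification label
# to the specialist second-stage sub-prompt it should trigger.
SUB_PROMPTS = {
    "frontend_ui_error": (
        "You are a senior frontend developer. Analyze the following UI error "
        "description and provide a triage report, including potential causes, "
        "affected components and suggested debugging steps: {details}"
    ),
    "backend_api_error": (
        "You are a senior backend engineer. Diagnose the following API failure "
        "and list likely root causes and next diagnostic steps: {details}"
    ),
}

# Escape hatch: unknown labels ask for more information instead of guessing.
FALLBACK = (
    "Insufficient information to classify this report. "
    "State what additional details are needed: {details}"
)

def build_followup(label: str, details: str) -> str:
    """Turn a classification label into the specialized follow-up prompt."""
    template = SUB_PROMPTS.get(label, FALLBACK)
    return template.format(details=details)
```

A report classified as `frontend_ui_error` gets the frontend-developer prompt with the error details substituted in, while an unrecognized label routes to the insufficient-information prompt rather than a confidently wrong answer.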
Prompting As A Core Competency

In the rapidly evolving landscape of artificial intelligence, the ability to communicate effectively with LLMs is fast becoming a core competency. The strategies employed by leading AI startups (hyper-specificity, role assignment, structured guidance, iterative refinement, built-in safeguards and rigorous evaluation) are not mere tricks; they represent a fundamental shift in how we interact with and build upon these powerful technologies. It's a continuous learning process, one that requires experimentation, precision and a deep understanding of both the AI's capabilities and its limitations. By adopting these advanced prompting techniques, your organization can unlock new levels of innovation and efficiency, truly harnessing AI's transformative potential.