Latest news with #promptengineering

Finextra
a day ago
- Business
- Finextra
Context Engineering for Financial Services: By Steve Wilcockson
The hottest discussion in AI right now, at least the one not about Agentic AI, is about how "context engineering" is more important than prompt engineering: how you give AI the data and information it needs to make decisions, and why it cannot (and must not) be a solely technical function.

"'Context' is actually how your company operates; the ideal versions of your reports, documents & processes that the AI can use as a model; the tone & voice of your organization. It is a cross-functional problem." So says renowned tech influencer and Associate Professor at the Wharton School, Ethan Mollick. He in turn cites fellow tech influencer Andrej Karpathy on X, who in turn cites Tobi Lutke, CEO of Shopify: "It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM." The three together - Mollick, Karpathy and Lutke - make for a powerful triumvirate of tech influencers.

Karpathy consolidates the subject nicely. He emphasizes that in real-world, industrial-strength LLM applications, the challenge entails filling the model's context window with just the right mix of information. He thinks about context engineering as both a science - because it involves structured systems, system-level thinking, data pipelines, and optimization - and an art, because it requires intuition about how LLMs interpret and prioritize information. His analysis reflects two of my predictions for 2025: one highlighting the increasing impact of uncertainty, the other a growing appreciation of knowledge.

Tech mortals offered further useful comments on the threads, two of my favorites being:

'Owning knowledge no longer sets anyone apart; what matters is pattern literacy - the ability to frame a goal, spot exactly what you don't know, and pull in just the right strands of information while an AI loom weaves those strands into coherent solutions.'

'It also feels like 'leadership' Tobi. How to give enough information, goal and then empower.'

I love the AI loom analogy, in part because it corresponds with one of my favorite data descriptors, the "contextual fabric". I like the leadership positivity too, because the AI looms and contextual fabrics are led by and empowered by humanity.

Here's my spin, to take or leave. Knowledge, based on data, isn't singular; it's contingent, contextual. Knowledge, and thus the contextual fabric of data in which it is embedded, is ever changing, constantly shifting, dependent on situations and needs. I believe knowledge is shaped by who speaks, who listens, and what is spoken about. That is, to a large extent, led by power and the powerful. Whether in Latin, science, religious education, finance and now AI, what counts as 'truth' is often a function of who gets to tell the story. It's not just about what you know, but how, why, and where you know it, and who told you it. But of course it's not that simple; agency matters - the peasant can become an abbot, the council house schoolgirl can become a Nobel prize-winning scientist, a frontier barbarian can become a Roman emperor. For AI, truth to power is on the one hand held by the big tech firms and grounded in their biases; on the other, it is democratizing, in that all of us and our experiences help train and ground AI, in theory at least. I digress. 
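To make Karpathy's "right mix of information" concrete, here is a minimal sketch, in Python, of a context-assembly step: the prompt is built from organizational sources (tone, document models, retrieved knowledge) rather than typed by hand. All names and data are hypothetical, not from any vendor mentioned here.

# A minimal sketch of "context engineering": assembling the model's context
# window from organizational sources rather than a clever one-line prompt.
# Every name and value below is invented for illustration.

TONE_GUIDE = "Write in a measured, client-friendly voice; avoid jargon."
REPORT_TEMPLATE = "Sections: Summary, Key Risks, Recommended Actions."

def build_context(task: str, retrieved_docs: list[str]) -> str:
    """Combine the task, house style, a document model, and retrieved
    knowledge into one context block for an LLM call."""
    doc_block = "\n\n".join(
        f"[Source {i + 1}]\n{d}" for i, d in enumerate(retrieved_docs)
    )
    return (
        f"ORGANIZATIONAL TONE:\n{TONE_GUIDE}\n\n"
        f"DOCUMENT MODEL:\n{REPORT_TEMPLATE}\n\n"
        f"RELEVANT KNOWLEDGE:\n{doc_block}\n\n"
        f"TASK:\n{task}"
    )

if __name__ == "__main__":
    prompt = build_context(
        "Draft a risk summary for the Q3 credit portfolio.",
        ["Q3 delinquencies rose 40bps quarter on quarter.",
         "Commercial real estate exposure is concentrated in two regions."],
    )
    print(prompt)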
For AI-informed decision intelligence, context will likely be the new computation that makes GenAI tooling more useful than simply being an oft-hallucinating stochastic parrot, while enhancing traditional AI - predictive machine learning, for example - to be increasingly relevant and affordable for the enterprise.

Context Engineering for FinTech

Context engineering - the art of shaping the data, metadata, and relationships that feed AI - may become the most critical discipline in tech. This is like gold for those of us in the FinTech data engineering space, because we're the dudes helping you create your own context. I'll explore how five different contextual approaches, all representing data engineering-relevant vendors I have worked for - technical computing, vector-based, time-series, graph and geospatial platforms - can support context engineering.

Parameterizing with Technical Computing

Technical computing tools - think R, Julia, MATLAB and Python's SciPy stack - can integrate domain-specific data directly into the model's environment through structured inputs, simulations, and real-time sensor data, normally as vectors, tables or matrices. For example, in engineering or robotics applications, an AI model can be fed with contextual information such as system dynamics, environmental parameters, or control constraints, so that the model can make decisions that are not just statistically sound but also physically meaningful within the modeled system. These tools can dynamically update the context window of an AI model, for example in scenarios like predictive maintenance or adaptive control, where AI must continuously adapt to new data. By embedding contextual cues - like historical trends, operational thresholds, or user-defined rules - such tools help ground the model's outputs in the specific realities of the task or domain.

Financial Services Use Cases

Quantitative Strategy Simulation: Simulate trading strategies and feed results into an LLM for interpretation or optimization.

Stress Testing Financial Models: Run Monte Carlo simulations or scenario analyses and use the outputs to inform LLMs about potential systemic risks.

Vectors and the Semantics of Similarity

Vector embeddings are closely related to the linear algebra of technical computing, but they bring semantic context to the table. Typically stored in so-called vector databases, they encode meaning into high-dimensional space, allowing AI to retrieve through search not just exact matches but conceptual neighbors. They thus allow for multiple stochastically arranged answers, not just one. Until recently, vector embeddings and vector databases have been the primary providers of enterprise context to LLMs, shoehorning all types of data into searchable mathematical vectors. Their downside is their brute-force, compute-intensive approach to storing and searching data. That said, they use transfer learning approaches - and deep neural nets - similar to those that drive LLMs. As expensive, powerful brute-force vehicles of Retrieval-Augmented Generation (RAG), vector databases don't simply store documents but understand them, and they have an increasingly proven place in enabling LLMs to ground their outputs in relevant, contextualized knowledge.

Financial Services Use Cases

Customer Support Automation: Retrieve similar past queries, regulatory documents, or product FAQs to inform LLM responses in real time.

Fraud Pattern Matching: Embed transaction descriptions and retrieve similar fraud cases to help the model assess risk or flag suspicious behavior. 
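A minimal sketch of that retrieval idea: cosine similarity over toy vectors. A real deployment would use a learned embedding model and a vector database; the hashed bag-of-words embedding here is only a self-contained stand-in.

import numpy as np

# Toy sketch of vector retrieval: embed documents, then return the nearest
# "conceptual neighbors" of a query by cosine similarity. The hash-based
# embedding is a stand-in for a real model (and is stable only per run).

DIM = 256

def embed(text: str) -> np.ndarray:
    """Map text to a unit vector; stand-in for a learned embedding model."""
    v = np.zeros(DIM)
    for token in text.lower().split():
        v[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most similar documents (cosine = dot of unit vectors)."""
    q = embed(query)
    return sorted(corpus, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

corpus = [
    "Card-not-present transaction flagged after rapid repeat purchases.",
    "Customer asked how to reset online banking password.",
    "Wire transfer to new beneficiary followed by account drain.",
]
print("Context passed to the LLM:",
      retrieve("suspicious transfers emptying an account", corpus))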
Time-Series, Temporal and Streaming Context

Time-series database and analytics providers, and the in-memory and columnar databases that can organize their data structures by time, specialize in knowing about the when. They can ensure that temporal context - the heartbeat of many use cases in financial markets as well as IoT and edge computing - grounds AI at the right time, with time-denominated sequential accuracy. Streaming systems like Kafka, Flink, et al. can also act as the real-time central nervous systems of financial event-based systems. It's not just about having access to time-stamped data, but about analyzing it in motion, enabling AI to detect patterns, anomalies, and causality as close as possible to real time. In context engineering, this is gold. Whether it's fraud that happens in milliseconds or sensor data populating insurance telematics, temporal granularity can be the difference between insight and noise, with context stored and delivered by what some might see as a data timehouse.

Financial Services Use Cases

Market Anomaly Detection: Injecting real-time price, volume, and volatility data into an LLM's context allows it to detect and explain unusual market behavior.

High-Frequency Trading Insights: Feed LLMs with microsecond-level trade data to analyze execution quality or latency arbitrage.
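A sketch of the market anomaly detection use case above, assuming pandas: reduce a time-stamped series to a few salient, time-denominated facts and pass only those into the LLM's context. The data and the z-score threshold are invented for illustration.

import pandas as pd

# Sketch: summarize a price series numerically, then hand only the salient,
# time-stamped facts to an LLM as context. Data and threshold are invented.

prices = pd.Series(
    [100.0, 100.2, 100.1, 100.3, 104.9, 100.4],
    index=pd.date_range("2025-06-02 09:30", periods=6, freq="min"),
)
returns = prices.pct_change().dropna()
zscores = (returns - returns.mean()) / returns.std()
# Toy threshold; production systems use robust, windowed statistics.
anomalies = zscores[zscores.abs() > 1.0]

context_lines = [
    f"{ts:%H:%M}: return {returns.loc[ts]:+.2%} (z-score {z:+.1f})"
    for ts, z in anomalies.items()
]
prompt = ("Explain these unusual intraday moves to a risk officer:\n"
          + "\n".join(context_lines))
print(prompt)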
Graphs That Know Who's Who

Graph and relationship-focussed providers play a powerful role in context engineering by structuring and surfacing relationships between entities that are otherwise hidden in raw data. In the context of large language models (LLMs), graph platforms can dynamically populate the model's context window with relevant, interconnected knowledge - such as relationships between people, organizations, events, or transactions. They enable the model to reason more effectively, disambiguate entities, and generate responses that are grounded in a rich, structured understanding of the domain. Graphs can act as a contextual memory layer through GraphRAG and Contextual RAG, ensuring that the LLM operates with awareness of the most relevant and trustworthy information. For example, graph databases - or other environments, such as Spark, that can store graph data types in accessible file formats like Parquet on HDFS - can be used to retrieve a subgraph of relevant nodes and edges based on a user query, which can then be serialized into natural language or structured prompts for the LLM (a sketch follows at the end of this article). Platforms that focus graph context around entity resolution and contextual decision intelligence can enrich the model's context with high-confidence, real-world connections - especially useful in domains like fraud detection, anti-money laundering, or customer intelligence.

Think of them as Shakespeare's Comedy of Errors meets Netflix's Department Q. Two Antipholuses and two Dromios rather than one of each in The Comedy of Errors? Only one Jennings brother to investigate in Department Q's case, and where does Kelly MacDonald fit into anything? Entity resolution and graph context can help resolve and connect them in a way that more standard data repositories and analytics tools struggle with. LLMs cannot function without correct and contingent knowledge of people, places, things and the relationships between them, though to be sure, many types of AI can also help discover the connections and resolve entities in the first place.

Financial Services Use Cases

AML and KYC Investigations: Surface hidden connections between accounts, transactions, and entities to inform LLMs during risk assessments.

Credit Risk Analysis: Use relationship graphs to understand borrower affiliations, guarantors, and exposure networks.

Seeing the World in Geospatial Layers

Geospatial platforms support context engineering by embedding spatial awareness into AI systems, enabling them to reason about location, proximity, movement, and environmental context. They can provide rich, structured data layers (e.g., terrain, infrastructure, demographics, weather) that can be dynamically retrieved and injected into an LLM's context window. This allows the model to generate responses that are not only linguistically coherent but also geographically grounded. For example, in disaster response, a geospatial platform can provide real-time satellite imagery, flood zones, and population density maps. This data can be translated into structured prompts or visual inputs for an AI model tasked with coordinating relief efforts or summarizing risk. Similarly, in urban planning or logistics, geospatial context helps the model understand constraints like traffic patterns, zoning laws, or accessibility. In essence, geospatial platforms act as a spatial memory layer, enriching the model's understanding of the physical world and enabling more accurate, context-aware decision-making.

Financial Services Use Cases

Branch Network Optimization: Combine demographic, economic, and competitor data to help LLMs recommend new branch locations.

Climate Risk Assessment: Integrate flood zones, wildfire risk, or urban heat maps to evaluate the environmental exposure of mortgage and insurance portfolios.

Context Engineering Beyond the Limits of Data, Knowledge & Truths

Context engineering, I believe, recognizes that data is partial, and that knowledge - and perhaps truth, or truths - needs to be situated, connected, and interpreted. Whether through graphs, time-series, vectors, technical computing platforms, or geospatial layering, AI depends on weaving the right contextual strands together. Where AI represents the loom, the five types of platforms I describe are like the spindles, needles, and dyes, drawing on their respective contextual fabrics of ever-changing data and driving threads of knowledge - contingent, contextual, and ready for action.
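As promised above, a minimal sketch of the GraphRAG-style step the graph section describes: retrieve a subgraph around a queried entity and serialize it into natural language for the LLM's context window. It uses the networkx library; all entities and relationships are fictitious.

import networkx as nx

# Sketch: pull the n-hop neighborhood of a queried entity out of a
# relationship graph and serialize it into plain-language prompt text.

G = nx.Graph()
G.add_edge("Acme Ltd", "J. Smith", relation="director of")
G.add_edge("J. Smith", "Offshore Co", relation="beneficial owner of")
G.add_edge("Offshore Co", "Acct 4417", relation="controls")
G.add_edge("Acct 4417", "Acct 9921", relation="frequent transfers to")

def subgraph_to_prompt(entity: str, hops: int = 2) -> str:
    """Serialize the n-hop neighborhood of an entity into prompt text."""
    nodes = nx.ego_graph(G, entity, radius=hops).nodes
    facts = [
        f"- {u} {d['relation']} {v}"
        for u, v, d in G.subgraph(nodes).edges(data=True)
    ]
    return f"Known relationships around {entity}:\n" + "\n".join(facts)

print(subgraph_to_prompt("Acme Ltd"))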


Forbes
23-06-2025
- Forbes
6 AI Terms All Content Creators Should Know
If you're a content creator, you may already be using AI or experimenting with how to incorporate it into your current processes. AI can help you brainstorm captions, outline a podcast episode, or clean up your video transcripts. But if you've ever wondered, "Am I using AI the right way?" or felt behind on the latest AI buzzwords, you're not alone. In this next wave of technology, you don't need a technical background to stay relevant and use AI. Becoming an AI-native creator is about being open to how you can get AI to work for you. An AI-native creator starts by learning the language of AI, which helps you navigate AI tools with clarity and confidence. Here are six essential AI terms explained to help you become an AI-native creator and stay ahead of the curve:

Prompt Engineering

Definition: Prompt engineering is the skill of crafting clear, specific instructions for AI tools to generate useful, accurate, and high-quality responses. A prompt is simply an input: a question, a task, or a command.

Why it matters: At its core, AI is only as good as the input it receives. A vague or generic prompt often leads to generic results. With practice, you'll learn how to create a well-structured, detailed prompt. The more context you provide and the clearer your directions, the better the results. For creators, prompt engineering can help you turn bland captions into scroll-stopping hooks. The good news is prompt engineering is not just a technical skill; it's a creative one. While some people write out their questions, you also have the option to speak directly into AI tools like Claude or ChatGPT. Prompt engineering allows creators to direct AI like a co-writer, production assistant, or researcher. As AI becomes more integrated into content creation workflows, understanding how to communicate with these tools becomes essential. To improve your prompts, consider incorporating details such as tone, voice, or structure that convey the intended content.

When content creators will use prompt engineering:

Example: Weak prompt: "Write a caption about Peru." Better prompt: "Write a short Instagram caption about visiting Machu Picchu. Mention that it's one of the Seven Wonders of the World and focus on how it felt to see it in person for the first time. I felt elated finally seeing Machu Picchu for the first time. Keep the tone reflective and under 200 words."

Hallucination

Definition: In AI, a hallucination occurs when an AI system confidently provides incorrect or fabricated information.

Why it matters: Hallucinations are one of the biggest risks in using generative AI (images, videos, or copy). Whether you're writing an Instagram caption about a historical site, a podcast script about industry trends, or a blog post referencing a public figure, there's always a chance the AI could introduce an error.

Example: You ask ChatGPT: "What year was Pike Place Market established?" It responds: "Pike Place Market was established in 1852." That's a hallucination. The correct year is 1907. 
How to navigate hallucinations as a content creator: While you might use AI tools for speed and support, they should not be your final fact-checker. You are still the editor-in-chief of your brand and content. You should still conduct your due diligence and verify facts against official sources.

Large Language Model (LLM)

Definition: The AI model type that powers tools like ChatGPT, Claude, and Gemini, trained on massive bodies of text such as books, websites, and articles to understand and generate human-like language.

Why it matters: When you ask for a blog outline, Instagram caption, or video script, the LLM analyzes your prompt and predicts a coherent, context-aware response. These models don't "know" things the way humans do, but they're skilled at producing relevant, high-quality language based on patterns they've learned. The more you guide them with prompts, the better they perform.

Fine-Tuning

Definition: Custom-training an AI model on your data, like captions, scripts, or blogs, so it mimics your tone and style. By fine-tuning an AI tool, you'll hopefully get content that sounds more like you.

Why it matters for content creators: If you consistently write in a recognizable voice, tone, or structure for newsletters, social captions, podcast intros, or branded scripts, fine-tuning helps the AI mirror that voice more precisely. It saves time and protects your brand identity as you scale. For content creators, fine-tuning might involve building a custom GPT and laying out a set of organized examples of the type of content they want the AI tool to mimic. You can copy and paste previous Instagram captions into a spreadsheet and upload them into the AI tools to share your voice and tone for future content (a sketch of what such examples can look like follows at the end of this article).

When should you consider fine-tuning?

Example: You fine-tune a model on your LinkedIn posts. Now, it consistently generates polished, insightful copy in your voice for the platform.

Synthetic Media

Definition: Content (text, audio, video, or images) created or partially generated using AI. Instead of being recorded, photographed, or written by a human in real time, synthetic media is created using algorithms.

Why it matters: Synthetic media is already transforming how creators produce content. If you've used an AI-generated voiceover or created a thumbnail with DALL·E, you've made synthetic media.

Example:

AI Native Creator

Definition: A content creator who integrates AI tools into their workflow, voice, business, and monetization strategies.

Why it matters: AI won't replace creators, but creators who understand AI will grow, scale, and develop their brands and businesses faster. Content creators and influencers who adopt AI early will learn how to co-create with it as a strategic partner. Mastering AI isn't about replacing your creativity or yourself. Content creators and influencers remain the storytellers, community builders, and tastemakers. The more fluently you speak the language of AI, the more opportunities you unlock for your brand and business. Learning and embracing AI tools can help content creators edit, brainstorm, refine, and produce content faster.
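For readers who want to see what "a set of organized examples" can look like in practice, here is a minimal sketch using the JSONL chat format that OpenAI's fine-tuning endpoints accept. The captions themselves are invented; you would swap in your own prompt/response pairs.

import json

# Sketch: fine-tuning examples as JSONL, one {"messages": [...]} record per
# line. The voice and captions below are invented placeholders.

examples = [
    {"messages": [
        {"role": "system", "content": "You write Instagram captions in my voice."},
        {"role": "user", "content": "Caption for a sunrise hike photo."},
        {"role": "assistant", "content": "Alarm at 4am, no regrets. Some views you have to earn."},
    ]},
    {"messages": [
        {"role": "system", "content": "You write Instagram captions in my voice."},
        {"role": "user", "content": "Caption for my home-office setup."},
        {"role": "assistant", "content": "Three plants, two screens, one very opinionated cat."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
print("Wrote", len(examples), "training examples")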


Geeky Gadgets
11-06-2025
- Business
- Geeky Gadgets
Master the Art of Prompt Engineering and Unlock AI's Full Potential
What if mastering a single skill could transform the way you interact with AI, unlocking its full potential to solve problems, generate ideas, and streamline tasks? Welcome to the world of prompt engineering, a discipline that's quickly becoming indispensable in the age of artificial intelligence. Whether you're a curious beginner or a seasoned user, crafting the right prompts can mean the difference between mediocre results and new insights. Think of it as learning to ask the perfect question - one that guides AI to deliver exactly what you need, every time. This how-to, brought to you by Matthew Berman, is your roadmap to mastering this critical skill, from foundational principles to advanced techniques.

Matthew Berman uncovers the secrets to creating clear, specific, and relevant prompts that drive consistent and high-quality outputs. You'll also explore advanced strategies, like iterative refinement and contextual framing, that can elevate your AI interactions to new heights. Along the way, we'll tackle common challenges, share practical examples, and reveal tips for optimizing prompts across diverse applications - from content creation to data analysis. By the end, you won't just understand prompt engineering - you'll be equipped to use it as a powerful tool to amplify your work and ideas. So, what makes a prompt truly effective? Let's explore the answer together.

Mastering Prompt Engineering

Understanding Prompt Engineering and Its Significance

Prompt engineering involves designing and refining inputs - referred to as 'prompts' - to guide AI models in generating accurate and relevant outputs. The quality of a prompt directly impacts the AI's performance. For example, a well-constructed prompt can enable an AI to summarize complex topics, generate innovative ideas, or solve technical problems with precision. By mastering this skill, you can unlock the full potential of AI systems across diverse applications, such as content creation, data analysis, and customer support. Effective prompt engineering ensures that the AI delivers outputs that align with your objectives, making it an indispensable tool for using AI technology.

Core Principles for Crafting Effective Prompts

Creating effective prompts requires adherence to three fundamental principles: clarity, specificity, and relevance. These principles form the foundation of successful prompt engineering (a short sketch follows this list).

Clarity: A clear prompt eliminates ambiguity, making sure the AI understands your request. For instance, instead of saying, 'Explain this,' specify what 'this' refers to and the type of explanation you require. A clear prompt might be, 'Explain the concept of renewable energy in simple terms.'

Specificity: Narrowing the scope of your request reduces the likelihood of irrelevant or generic responses. For example, instead of asking, 'Describe renewable energy,' you could say, 'List three advantages of solar energy compared to fossil fuels.'

Relevance: Align your prompt with the AI model's capabilities. Understanding the strengths and limitations of the system is crucial for crafting prompts that yield meaningful results. For example, some models excel at creative writing, while others are better suited for technical analysis.

By applying these principles, you can create prompts that are actionable and precise, leading to more effective and reliable outputs.
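One way to read the three principles is as a reusable template. The sketch below is my own illustration, not from Berman's video; the field names and example values are arbitrary.

# Sketch: the three principles as a prompt template. Field names are my own.

def build_prompt(task: str, subject: str, audience: str, constraints: str) -> str:
    """Clarity: name the task and subject explicitly.
    Specificity: bound the output with constraints.
    Relevance: state the audience so the model pitches its answer correctly."""
    return (
        f"{task} {subject}. "
        f"Audience: {audience}. "
        f"Constraints: {constraints}."
    )

vague = "Explain this."
better = build_prompt(
    task="Explain",
    subject="the concept of renewable energy",
    audience="readers with no technical background",
    constraints="simple terms, three short paragraphs, one everyday example",
)
print("Vague: ", vague)
print("Better:", better)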
Advanced Techniques for Refining Prompts

Refining prompts is an iterative process that involves testing and improving their effectiveness. Advanced techniques can help you fine-tune prompts for greater precision and relevance, especially when working on complex tasks.

Iterative Adjustments: Analyze the AI's initial responses to identify areas for improvement. If the output is too vague, revise the prompt to include more detailed instructions. For example, instead of 'Explain climate change,' you might say, 'Explain the primary causes of climate change and their impact on global ecosystems.'

Contextual Framing: Adding context or constraints to your prompt can guide the AI toward more accurate and relevant responses. For instance, specifying 'Assume the audience is unfamiliar with technical jargon' helps the AI tailor its output for a non-technical audience.

Layered Prompts: For complex tasks, use a series of prompts to guide the AI step by step. For example, start with 'Create an outline for a report on renewable energy,' followed by 'Expand on each section of the outline with detailed explanations.'

These techniques allow you to refine prompts systematically, making sure that the AI delivers outputs that meet your expectations; a sketch of the layered approach follows.
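A minimal sketch of the layered-prompt technique, with a hypothetical call_llm placeholder standing in for whatever chat API you actually use:

# Sketch: layered prompts. The model's first answer is fed back as context
# for the next step. call_llm is a hypothetical stand-in for a real client.

def call_llm(prompt: str) -> str:
    """Placeholder for a real API call (e.g., an OpenAI or Anthropic client)."""
    return f"<model response to: {prompt[:60]}...>"

# Step 1: get the outline.
outline = call_llm("Create an outline for a report on renewable energy.")

# Step 2: expand it, passing step 1's output back in as context.
report = call_llm(
    "Expand on each section of the outline below with detailed explanations.\n\n"
    f"OUTLINE:\n{outline}"
)
print(report)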
Strategies for Iterative Optimization

Prompt optimization is a continuous process that involves experimentation and refinement. A systematic approach can help you develop prompts that consistently deliver high-quality results.

Experiment with Variations: Test different phrasing, formats, and structures to determine which version produces the best results. For example, compare the effectiveness of an open-ended question versus a directive statement for the same task.

Maintain a Prompt Log: Keep a record of prompts and their corresponding outputs. This helps you track what works, identify patterns, and build a library of effective prompts for future use (a sketch of a simple log follows at the end of this article).

Evaluate Outputs: Assess the AI's responses based on criteria such as relevance, coherence, and completeness. For instance, if the goal is to generate a persuasive argument, check whether the output includes logical reasoning, evidence, and a clear conclusion.

By following these strategies, you can refine your prompts over time, ensuring consistent and reliable performance from the AI.

Addressing Common Challenges in Prompt Engineering

Even with careful crafting, prompts may sometimes fail to produce satisfactory results. Understanding common challenges and their solutions can help you troubleshoot effectively.

Vague or Irrelevant Outputs: Revisit the prompt's clarity and specificity. Ensure the instructions are explicit and provide additional context if needed. For example, instead of 'Describe this topic,' specify, 'Describe the benefits of renewable energy with three examples.'

Overly Generic Responses: Add constraints or request more detail. For instance, instead of 'Explain renewable energy,' you could say, 'Explain renewable energy with a focus on solar and wind power.'

Task Complexity: Break down large tasks into smaller, manageable components. For example, instead of asking the AI to 'Write a detailed report,' divide the task into sections, such as 'Create an outline' and 'Expand on each section.'

By addressing these challenges systematically, you can refine your prompts to achieve better outcomes and more precise results.

Maximizing the Potential of AI Models

To make full use of AI models, it is essential to align your prompts with the model's strengths. Some models excel at creative tasks, such as storytelling or brainstorming, while others are better suited for analytical or technical challenges. Familiarize yourself with the specific capabilities of the AI system you are using and tailor your prompts accordingly. Additionally, staying informed about advancements in AI technology can help you adapt your prompt engineering techniques. As models evolve, new features and capabilities may become available, offering opportunities to enhance your interactions with AI systems. By combining a deep understanding of the model's capabilities with effective prompt engineering techniques, you can maximize the value of AI in your work and achieve superior outcomes.
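Returning to the "Maintain a Prompt Log" strategy above, here is a minimal sketch of such a log as an append-only JSONL file; the file name and fields are my own choices, not from the video.

import json, datetime

# Sketch: append each prompt, output, and a quick usefulness rating to a
# JSONL file so effective prompts can be found and reused later.

def log_prompt(prompt: str, output: str, rating: int,
               path: str = "prompt_log.jsonl") -> None:
    """Append one prompt/output pair with a 1-5 usefulness rating."""
    record = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "output": output,
        "rating": rating,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prompt(
    "List three advantages of solar energy compared to fossil fuels.",
    "1) Lower lifetime emissions 2) No fuel costs 3) Modular deployment",
    rating=4,
)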


TechCrunch
07-06-2025
- Business
- TechCrunch
Superblocks CEO: How to find a unicorn idea by studying AI system prompts
Brad Menezes, CEO of enterprise vibe coding startup Superblocks, believes the next crop of billion-dollar startup ideas is hiding in almost plain sight: the system prompts used by existing unicorn AI startups.

System prompts are the lengthy prompts - often 5,000 to 6,000 words or more - that AI startups use to instruct the foundational models from companies like OpenAI or Anthropic on how to generate their application-level AI products. They are, in Menezes' view, like a master class in prompt engineering. 'Every single company has a completely different system prompt for the same [foundational] model,' he told TechCrunch. 'They're trying to get the model to do exactly what's required for a specific domain, specific tasks.'

System prompts aren't exactly hidden. Customers can ask many AI tools to share theirs. But they aren't always publicly available. So as part of his own startup's new product announcement of an enterprise coding AI agent named Clark, Superblocks offered to share a file of 19 system prompts from some of the most popular AI coding products like Windsurf, Manus, Cursor, Lovable and Bolt. Menezes's tweet went viral, viewed by almost 2 million people, including big names in the Valley like Sam Blond, formerly of Founders Fund and Brex, and Aaron Levie, a Superblocks investor. Superblocks announced last week that it raised a $23 million Series A, bringing its total to $60 million for its vibe coding tools geared to non-developers at enterprises. So we asked Menezes to walk us through how to study others' system prompts to glean insights.

'I'd say the biggest learning for us building Clark and reading through the system prompts is that the system prompt itself is maybe 20% of the secret sauce,' Menezes explained. This prompt gives the LLM the baseline of what to do. The other 80% is 'prompt enrichment,' he said, which is the infrastructure a startup builds around the calls to the LLM. That part includes instructions it attaches to a user's prompt, and actions taken when returning the response, such as checking for accuracy.

He said there are three parts of system prompts to study: role prompting, contextual prompting, and tool use (a sketch of all three appears after this article). The first thing to notice is that, while system prompts are written in natural language, they are exceptionally specific. 'You basically have to speak as if you would to a human co-worker,' Menezes said. 'And the instructions have to be perfect.'

Role prompting helps the LLMs be consistent, giving both purpose and personality. For instance, Devin's begins with, 'You are Devin, a software engineer using a real computer operating system. You are a real code-wiz: few programmers are as talented as you at understanding codebases, writing functional and clean code, and iterating on your changes until they are correct.'

Contextual prompting gives the models the context to consider before acting. It should provide guardrails that can, for instance, reduce costs and ensure clarity on tasks. 
Cursor's instructs, 'Only call tools when needed, and never mention tool names to the user — just describe what you're doing. … don't show code unless asked. … Read relevant file content before editing and fix clear errors, but don't guess or loop fixes more than three times.'

Tool use enables agentic tasks because it instructs the models how to go beyond just generating text. Replit's, for instance, is long and describes editing and searching code, installing languages, setting up and querying PostgreSQL databases, executing shell commands and more.

Studying others' system prompts helped Menezes see what other vibe coders emphasized. Tools like Lovable, V0, and Bolt 'focus on fast iteration,' he said, whereas 'Manus, Devin, OpenAI Codex, and Replit' help users create full-stack applications, but 'the output is still raw code.' Menezes saw an opportunity to let non-programmers write apps if his startup could handle more, such as security and access to enterprise data sources like Salesforce.

While he's not yet running the multi-billion-dollar startup of his dreams, Superblocks has landed some notable companies as customers, it said, including Instacart and Papaya Global. Menezes is also dogfooding the product internally. His software engineers are not allowed to write internal tools; they can only build the product. So his business folks have built agents for all their needs, like one that uses CRM data to identify leads, one that tracks support metrics, and another that balances the assignments of the human sales engineers. 'This is basically a way for us to build the tools and not buy the tools,' he said.
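To tie together the three ingredients Menezes names - role prompting, contextual guardrails, and tool use - here is a hedged sketch of how they typically land in a chat-API request. The agent name and tool are invented; the tools schema follows the common OpenAI-style function-calling shape, not any of the leaked prompts themselves.

# Sketch of system-prompt anatomy. Names and the tool are invented.

system_prompt = (
    # Role: purpose and personality.
    "You are Relia, a careful coding agent working in a real repository.\n"
    # Contextual guardrails: cost and clarity constraints.
    "Only call tools when needed; never mention tool names to the user. "
    "Read relevant files before editing. Do not retry a fix more than "
    "three times.\n"
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Execute a shell command and return stdout.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Add a unit test for the date parser."},
]
# A real request would then be something like:
# client.chat.completions.create(model=..., messages=messages, tools=tools)
print(system_prompt)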


Forbes
03-06-2025
- Business
- Forbes
Forget Prompting. To Win In The AI Age, You Must ASK
In 2023, the global prompt-engineering market was valued at $222.1 million and projected to expand at a compound annual growth rate (CAGR) of 32.8% from 2024-30. In early 2025, Sam Altman, CEO of OpenAI, said that 'the prompting tricks that many people used in 2023 are no longer relevant, and some of them will never be needed again.'

Were we too quick to predict a great future for prompt engineering? According to Altman, the answer is yes. On Adam Grant's ReThinking podcast, he said that figuring out what questions to ask will soon be more important than figuring out the answer. Although it wasn't clear what Altman meant by 'figuring out what questions to ask,' it was clear that he wasn't talking about prompting. While the dictionary defines prompt engineering as the process of designing inputs for generative AI models to deliver useful, accurate, and relevant responses, the process of figuring out what questions to ask is far harder to define.

In a recent TED Talk, Perplexity CEO Aravind Srinivas described our innate curiosity and relentless questioning as a 'human quality that makes us so human.' But he didn't give a definition of asking, let alone a guide for figuring out what questions to ask. AI executives like Altman and Srinivas seem to agree that the most valuable skill in the age of AI is neither prompt engineering, nor IQ, EQ, or adaptability. It's the 'human quality' of figuring out what to ask. To understand and unlock this human quality, however, we do not get much help from the AI executives.

In my LinkedIn Learning course on how to unlock your question mindset to think clearly and navigate uncertainty, I make a fundamental distinction between speaking clearly and thinking clearly. While speaking clearly is about expressing and explaining something that you already know well, thinking clearly is about exploring and experimenting with something that you don't know - yet.

Just as you can't speak clearly unless you know what you want to say, prompting requires you to know what kind of answers you're looking for. You must design your prompt in a way that makes it possible for AI to deliver useful, accurate, and relevant responses. And in order for you to do that, you not only need to know what it means for an answer to be useful, accurate, and relevant, you also need to adjust your input to the machine. In short, to be good at prompting, you must be good at adapting what you already know to what the machine can already do.

With asking, it's the other way around. Just as you can't think clearly unless you're open to new insights and ideas, you can't figure out what to ask if you think you already know the answer. To ask, you must be willing to be wrong - about what a useful, accurate, and relevant answer is, but also about everything else. And in order for you to do that, you not only have to acknowledge that you don't know the answer, you also have to accept the possibility that there are no good answers. In short, to be good at asking, you must be good at continuing - and being content - with asking. 
Srinivas seemed to reach a somewhat similar conclusion in his TED Talk when he said, 'We are all curious and when we are curious, we want answers. We really do. But what we really want are those answers that lead us to the next set of questions.'

But where does that leave you? For Srinivas and other AI executives, it leads to a discussion of the future of technology: 'With all of the world's answers available to us,' Srinivas said, 'the tools we use to ask our questions, and the stuff that we build using those answers, those to me are the future of our technology.'

But are the tools that Srinivas and others are building really designed for you to ask questions? Or are they designed for you to adapt what you already know to what the machines can already do? By not distinguishing between prompting and asking questions, AI executives are not making it easier for you to understand and unlock your 'human quality' of asking questions. Rather, they make it harder for you and everyone else to remember what asking questions is really about.

The ASK acronym is not derived from 'the future of our technology'. It is derived from the past and present of our humanity. More specifically, it is derived from philosophy's 2,400 years of experience in asking the existential, ethical, and epistemological questions that no one - least of all a machine - can answer for you. These are the questions that help you figure out who you are, what is the right thing to do, and how you deal with what you (don't) know. They typically present themselves as:

Existential doubt or crises, e.g. 'Who am I if I cannot have the career I thought I would have?' 'Can I do bad things and still be a good person?' 'Will I still be the same if I change how I live my life?'

Ethical dilemmas, e.g. 'Should I still pursue this opportunity now that I know it will have a negative impact on other people?' 'What consequences will it have if I choose not to speak up?' 'Would I expect others to take action if they knew what I know?'

Epistemological challenges, e.g. 'Is it responsible to make this decision when I lack important information?' 'How much of what I think I know is based on assumptions that I ought to test before I move on?' 'Could I be wrong?'

Asking these kinds of questions of a tool built with the future of technology in mind may help you 'build stuff', but it won't help you live with the fact that sometimes there are no clear answers. And when that is the case, it doesn't matter how good you are at prompting. All that matters is whether or not you are willing to ASK. So, maybe that's what Altman meant when he said that figuring out what questions to ask will soon be more important than figuring out the answer? Maybe that's what it takes to win in the AI age: to stop prompting and start ASK-ing?