The illusion of control: How prompting Gen AI reroutes you to 'average'

Campaign ME, 26-05-2025

After speaking on a recent AI panel and hearing the same questions come up again and again, I realised something simple but important: most people don't actually understand the difference between artificial intelligence and the kind of AI we interact with every day.
The tools we use (ChatGPT, image generators, writing assistants) aren't just 'AI'. They're generative AI (GenAI), a very specific category of machine learning built to generate content by predicting what comes next.
As someone who moves between research and creative work, I don't see GenAI as a magic tool. I see it more like a navigation system. Every time we prompt it, we're giving it directions, but not on an open map. We're working with routes that have already been travelled, mapped, and optimised by everyone who came before us.
The more people follow those routes, the more paved and permanent they become. So while it may feel like you're exploring something new, most of the time you're being rerouted through the most popular path. Unless you understand how the model was trained, how it predicts, and what its limitations are, you'll keep circling familiar ground.
That's why I believe we need to stop treating GenAI like cruise control and start learning how it actually works. If your prompts have ever felt like they're taking you in loops, you're not imagining it; you're following a road that was already laid. Let's look at where GenAI came from, how it works, and what it means when most of our roads lead to the same place.
History: From logic machines to language models
The term artificial intelligence was coined in 1956 at the Dartmouth Summer Research Project. Early AI systems focused on symbolic reasoning and logical problem-solving but were constrained by limited computing power; think of the codebreaking machine in Morten Tyldum's 2014 film The Imitation Game. These limitations contributed to the first AI winter in the 1970s, when interest and funding declined sharply.
By the early 2000s, advances in computing power, algorithm development, and data availability ushered in the big data era. AI moved from theoretical models to practical applications, automating structured data tasks: recommendation engines such as Amazon's and Netflix's, early social media ranking algorithms, and predictive text tools like Google's autocomplete.
A transformative milestone came in 2017, when Google researchers introduced the Transformer architecture in the seminal paper Attention Is All You Need. This innovation led to large language models (LLMs), the foundation of today's generative AI systems.
Functionality: How Gen AI thinks in averages
Everything begins with the training data: massive amounts of text, cleaned, filtered, and then broken down into small parts called tokens. A token might be a whole word, a piece of a word, or even punctuation. Each token is assigned a numerical ID, which means the model doesn't actually read language; it processes streams of numbers that stand in for language.
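To make that concrete, here is a toy sketch of tokenisation. Real systems use learned subword tokenisers (such as byte-pair encoding) with vocabularies of tens of thousands of entries; the seven-entry vocabulary below is invented purely for illustration.

```python
# Toy tokeniser: text in, integer IDs out. The vocabulary is made up;
# real tokenisers split words into learned subword pieces, but the
# principle is the same: the model only ever sees the numbers.
vocab = {"the": 0, "model": 1, "reads": 2, "numbers": 3, ",": 4, "not": 5, "words": 6}

def tokenise(text):
    """Map each known word or punctuation mark to its integer ID."""
    pieces = text.replace(",", " ,").split()
    return [vocab[p] for p in pieces]

ids = tokenise("the model reads numbers, not words")
print(ids)  # [0, 1, 2, 3, 4, 5, 6]
```

From the model's point of view, the sentence above simply is that list of integers.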
Once tokenised, the model learns by predicting the next token in a sequence, over and over, across billions of examples. But not all data is treated equally. Higher-quality sources, like curated books or peer-reviewed articles, are weighted more heavily than casual internet text. This influences how often certain token patterns are reinforced.
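A heavily simplified way to see 'learning by predicting the next token' is to count continuations in a tiny corpus. The corpus below is invented, and a real model replaces the counting with a neural network trained over billions of examples, but the pull toward the most common continuation is the same.

```python
from collections import Counter, defaultdict

# Count which token follows which in a tiny made-up corpus, then
# "predict" by picking the most frequent continuation seen in training.
corpus = "the cat sat on the mat the cat sat on the chair the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation observed after this token."""
    return follows[token].most_common(1)[0][0]

print(predict_next("cat"))  # sat -- it beats "slept" two to one
```

The rarer continuation ("slept") is never chosen, even though it appeared in training; the statistically stable path wins.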
So, if a phrase shows up repeatedly in high-quality contexts, the model is more likely to internalise that phrasing as a reliable pattern. Basically, it learns what an 'average' response looks like: not a mathematical mean, but a convergence on the most statistically stable continuation. This averaging process isn't limited to training. It shows up again when you use the model.
Every prompt you enter is converted into tokens and passed through the model's layers, where each token is compared with every other using what's called self-attention: a kind of real-time weighted averaging of context. These weightings are never shown to the user. The model then outputs the token it deems most probable, based on all the patterns it has seen.
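That 'real-time weighted averaging' can be sketched in a few lines. This is a bare-bones illustration with made-up 4-dimensional embeddings for a 3-token sequence; real models add learned query, key, and value projections and run many attention heads in parallel.

```python
import numpy as np

# Minimal self-attention sketch: each token scores its match against
# every other token, the scores are softmaxed into weights that sum to 1,
# and each output is a weighted average of all token embeddings.
np.random.seed(0)
x = np.random.randn(3, 4)                      # 3 tokens, 4-dim embeddings

scores = x @ x.T / np.sqrt(x.shape[1])         # scaled dot-product similarities
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1

out = weights @ x                              # weighted average of the context

print(weights.round(2))
```

Every output row is literally a blend of the whole sequence; the weights decide how much each neighbour contributes, and the user never sees them.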
This makes the system lean heavily toward the median, the safe middle of the distribution. It's why answers often feel polished but cautious: they're optimised to avoid being wrong by aiming for what is most likely to be right.
You can change this 'averaging' with a setting called temperature, which controls how sharply the model concentrates on its most probable tokens. At low temperature, the model stays close to the statistical centre: safe, predictable, and a bit dull.
As you raise the temperature, the model starts scattering probabilities away from the median, allowing less common, more surprising tokens to slip in. But with that variation comes volatility. When the model output moves away from the centre of the distribution, you get randomness, not necessarily creativity.
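The effect of temperature can be seen directly in a small sketch. The logits below are invented; in a real model they are the raw scores the final layer assigns to every token in the vocabulary, and dividing them by the temperature before the softmax is the standard mechanism.

```python
import math

# Temperature reshapes next-token probabilities: dividing the raw scores
# (logits) by a small temperature sharpens the distribution, dividing by
# a large one flattens it.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]   # a "safe" token, a plausible one, an unlikely one

low = softmax_with_temperature(logits, 0.5)   # mass piles onto the top token
high = softmax_with_temperature(logits, 2.0)  # unlikely tokens gain ground

print([round(p, 2) for p in low])   # [0.98, 0.02, 0.0]
print([round(p, 2) for p in high])  # [0.63, 0.23, 0.14]
```

Note that raising the temperature only spreads probability toward the tails of the same learned distribution; it adds variation, not new knowledge.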
So whether in training or in real-time generation, Gen AI is built to replicate the middle. Its intelligence, if we can call it that, lies in its ability to distill billions of possibilities into one standardised output. And while that's incredibly powerful, it also reveals the system's fundamental limit: it doesn't invent meaning, it averages it.
Gen AI prompting: Steering the system without seeing the road
Prompting isn't just about asking a question; it's about homing in on the exact statistical terrain the model mapped during training. When we write a prompt, we are navigating through token space, triggering patterns the model has seen before, and pulling from averages baked into the system.
The more specific the prompt, the tighter the clustering around certain tokens and their learned probabilities. But we often forget that the user interface is smoothing over layers of complexity. We don't see the weighted influences of our word choices or the invisible temperature settings shaping the randomness of the response.
These models are built to serve a general audience, another kind of average, and that makes it even harder to steer them with precision. So while it may feel like prompting is open-ended, it's really about negotiating with invisible distributions and system defaults that are doing a lot more deciding than we think.
Prompt frameworks like PICO (persona, instructions, context, output) or RTF (role, task, format) can help shape structure, but it's worth remembering that they, too, are built around assumptions about what works most of the time for most people. That's still an average. Sometimes you'll get lucky and the model's output will land above your own knowledge: it will sound brilliant, insightful, maybe even novel. But the moment you hand it to someone deep in the subject, it becomes obvious: it sounds like AI.
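For illustration, a PICO-style prompt can be assembled as a simple template. The field contents below are hypothetical examples, not a prescribed recipe; the point is that the structure itself encodes assumptions about what a 'good' prompt looks like.

```python
# A minimal sketch of the PICO structure (persona, instructions,
# context, output) as a prompt template. All field values are invented.
def pico_prompt(persona, instructions, context, output):
    return (
        f"You are {persona}.\n"
        f"Task: {instructions}\n"
        f"Context: {context}\n"
        f"Output format: {output}"
    )

prompt = pico_prompt(
    persona="a design educator writing for industry practitioners",
    instructions="explain why generative models converge on average outputs",
    context="the audience has used ChatGPT but not studied machine learning",
    output="three short paragraphs, no jargon",
)
print(prompt)
```

A template like this narrows the token space the model draws from, which is exactly why it produces consistent, and consistently average, results.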
That's the trick: understanding the average you're triggering and knowing whether it serves your purpose. Who will read this? What will they expect? What level of depth or originality do they need? That's what should shape your prompt. Whether you use a structured framework or just write freely, what matters is clarity about the target and awareness of the terrain you're pulling from.
And sometimes, the best move is tactical: close the chat, open a fresh window. The weight of previous tokens, cached paths, and context history might be skewing everything. It's not your fault. The averages just got noisy. Start again, recalibrate, and aim for a better median.
Conclusion: When the average becomes the interface
One of the things that worries me is how the companies behind GenAI are learning to optimise for the average. The more people use these tools with prompt engineering templates and frameworks, the more the system starts shaping itself around those patterns. These models are trained to adapt, and what they're adapting to is us, our habits, our shortcuts, our structured formats.
So what happens when the interface itself starts reinforcing those same averages? It becomes harder to reach anything outside the probable, the expected, the familiar. The weird, the original, the statistically unlikely, those start to fade into the background.
This becomes even more complicated when we look at agentic AI, the kind that seems to make decisions or deliver strong outputs on its own. It can be very convincing. But here's the issue: it's still built on averages. We risk handing over not just the task of writing or researching, but the act of thinking itself. And when the machine is tuned to reflect what's most common, we're not just outsourcing intelligence; we're outsourcing our sense of nuance, our ability to hold an opinion that doesn't sit neatly in the middle.
So the next time an AI gives you something that feels weirdly brilliant or frustratingly obvious, stop and consider what's really happening. It's not inventive. It's navigating, pulling from the most common, most accepted, most repeated paths it's seen before. Your prompt triggered that route, and that route reflects the prompts of thousands of others like you.
Once you understand that, you can start steering more intentionally. You can recognise when your directions are being rerouted through popular lanes and when it's time to get off the highway. And sometimes, when the output is so average it feels broken, the smartest move is simple: close the window, reset the route, and start over. Because every now and then, the only way to find something new is to stop following the crowd.
By Hiba Hassan, Head of the Design and Visual Communications Department, SAE Dubai
