The illusion of control: How prompting Gen AI reroutes you to 'average'

Campaign ME, 26-05-2025

After speaking on a recent AI panel and hearing the same questions come up again and again, I realised something simple but important: most people don't actually understand the difference between artificial intelligence and the kind of AI we interact with every day. The tools we use, from ChatGPT to image generators and writing assistants, aren't just 'AI'. They're generative AI (GenAI), a very specific category of machine learning built to generate content by predicting what comes next.

As someone who moves between research and creative work, I don't see GenAI as a magic tool. I see it more like a navigation system. Every time we prompt it, we're giving it directions, but not on an open map. We're working with routes that have already been travelled, mapped, and optimised by everyone who came before us. The more people follow those routes, the more paved and permanent they become. So while it may feel like you're exploring something new, most of the time you're being rerouted through the most popular path. Unless you understand how the model was trained, how it predicts, and what its limitations are, you'll keep circling familiar ground.

That's why I believe we need to stop treating Gen AI like cruise control and start learning how it actually works. If your prompts have ever felt like they're taking you in loops, you're not imagining it; you're just following a road that was already laid. Let's look at where it came from, how GenAI works, and what it means when most of our roads lead to the same place.

History: From logic machines to language models

The term artificial intelligence was coined in 1956 at the Dartmouth Summer Research Project. Early AI systems focused on symbolic reasoning and logical problem-solving but were constrained by limited computing power. Think of the code-breaking machine in Morten Tyldum's 2014 movie, The Imitation Game. These limitations contributed to the first AI winter in the 1970s, when interest and funding declined sharply.

By the early 2000s, advances in computing power, algorithm development, and data availability ushered in the big data era. AI transitioned from theoretical models to practical applications, automating structured-data tasks such as the recommendation engines behind Amazon's e-commerce platform and Netflix, early social media ranking algorithms, and predictive text tools like Google's autocomplete.

A transformative milestone came in 2017, when Google researchers introduced the Transformer architecture in the seminal paper Attention Is All You Need. This innovation led to the development of large language models (LLMs) and the foundational structures of today's generative AI systems.

Functionality: How Gen AI thinks in averages

Everything begins with the training data: massive amounts of text, cleaned, filtered, and then broken down into small parts called tokens. A token might be a whole word, a piece of a word, or even punctuation. Each token is assigned a numerical ID, which means the model doesn't actually read language; it processes streams of numbers that stand in for language. Once tokenised, the model learns by predicting the next token in a sequence, over and over, across billions of examples. But not all data is treated equally. Higher-quality sources, like curated books or peer-reviewed articles, are weighted more heavily than casual internet text. This influences how often certain token patterns are reinforced. So, if a phrase shows up repeatedly in high-quality contexts, the model is more likely to internalise that phrasing as a reliable pattern. Basically, it learns what an 'average' response looks like: not a mathematical average, but a convergence on the most statistically stable continuation.
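To make that concrete, here is a deliberately tiny, illustrative Python sketch of the same idea at toy scale. It is not how a real LLM is built: the three-sentence corpus, the word-level tokeniser, and the bigram counting 'model' are all invented for illustration, and production systems use subword tokenisers and neural networks trained on billions of examples. But the loop is the same in spirit: turn text into token IDs, score possible continuations from what has been seen before, and pick the most probable next token.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the (vastly larger) training data.
corpus = [
    "the model predicts the next token",
    "the model predicts the most likely token",
    "the model averages what it has seen",
]

# 1) Tokenise: here a "token" is just a lowercase word mapped to an integer ID.
#    Real systems use subword tokenisers, but the principle (text -> IDs) holds.
vocab = {}
def to_ids(text):
    ids = []
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

# 2) "Train" by counting which token follows which -- a crude stand-in for the
#    statistical patterns a neural network internalises during training.
next_counts = defaultdict(Counter)
for sentence in corpus:
    ids = to_ids(sentence)
    for current, nxt in zip(ids, ids[1:]):
        next_counts[current][nxt] += 1

# 3) Generate by repeatedly choosing the single most probable next token --
#    the safest, most 'average' continuation given everything seen so far.
id_to_word = {i: w for w, i in vocab.items()}
def generate(prompt, steps=4):
    ids = to_ids(prompt)
    for _ in range(steps):
        candidates = next_counts.get(ids[-1])
        if not candidates:
            break
        ids.append(candidates.most_common(1)[0][0])  # greedy: pick the mode
    return " ".join(id_to_word[i] for i in ids)

print(generate("the model"))  # -> "the model predicts the model predicts"
```

Even in this toy version, always picking the most probable continuation loops the output straight back onto the busiest route, which is exactly the 'circling familiar ground' effect described above.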
This averaging process isn't limited to training. It shows up again when you use the model. Every prompt you enter is converted into tokens and passed through layers of the model where each token is compared with every other using what's called self-attention, a kind of real-time weighted averaging of context. These weightings are never revealed to the person prompting. The model then outputs the token it deems most probable, based on all the patterns it has seen. This makes the system lean heavily toward the median, the safe middle of the distribution. It's why answers often feel polished but cautious: they're optimised to avoid being wrong by aiming for what is most likely to be right.

You can change the 'averaging' with a setting called temperature, which controls how sharply the model focuses on the median results. At low temperature, the model stays close to the statistical centre: safe, predictable, and a bit dull. As you raise the temperature, the model starts scattering probabilities away from the median, allowing less common, more surprising tokens to slip in. But with that variation comes volatility. When the model output moves away from the centre of the distribution, you get randomness, not necessarily creativity.
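Here is a minimal sketch of how temperature reshapes that choice, using the standard softmax-with-temperature formulation. The candidate tokens and their scores below are invented purely for illustration; a real model works over a vocabulary of tens of thousands of tokens.

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate next tokens.
logits = {"reliable": 2.0, "robust": 1.5, "quirky": 0.3, "luminous": -0.5}

def sample_next_token(logits, temperature=1.0):
    """Turn scores into probabilities with a softmax, then sample one token.

    Low temperature sharpens the distribution around the most probable,
    'safe' token; high temperature flattens it, letting unlikely tokens
    through more often.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0], probs

for t in (0.2, 1.0, 2.0):
    _, probs = sample_next_token(logits, temperature=t)
    top = max(probs, key=probs.get)
    print(f"temperature={t}: P({top}) = {probs[top]:.2f}")
# At temperature 0.2 roughly 92% of the probability sits on 'reliable';
# at 2.0 the distribution flattens and 'quirky' or 'luminous' can slip in.
```

Notice that raising the temperature only spreads the choice across the same learned distribution; the surprising tokens are still drawn from what the model has already seen, just further from its centre.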
So whether in training or in real-time generation, Gen AI is built to replicate the middle. Its intelligence, if we can call it that, lies in its ability to distil billions of possibilities into one standardised output. And while that's incredibly powerful, it also reveals the system's fundamental limit: it doesn't invent meaning, it averages it.

Gen AI prompting: Steering the system without seeing the road

Prompting isn't just about asking a question, it's about narrowing in on the exact statistical terrain the model has mapped during training. When we write a prompt, we are navigating through token space, triggering patterns the model has seen before, and pulling from averages baked into the system. The more specific the prompt, the tighter the clustering around certain tokens and their learned probabilities.

But we often forget that the user interface is smoothing over layers of complexity. We don't see the weighted influence of our word choices or the invisible temperature settings shaping the randomness of the response. These models are built to serve a general audience, another kind of average, and that makes it even harder to steer them with precision. So while it may feel like prompting is open-ended, it's really about negotiating with invisible distributions and system defaults that are doing a lot more deciding than we think.

Prompt frameworks like PICO (persona, instructions, context, output) or RTF (role, task, format) can help shape structure, but it's worth remembering that they, too, are built around assumptions of what works most of the time for most people. That's still an average. Sometimes you'll get lucky and the model's output will land above your own knowledge: it will sound brilliant, insightful, maybe even novel. But the moment you hand it to someone deep in the subject, it becomes obvious: it sounds like AI.

That's the trick: understanding the average you're triggering and knowing whether it serves your purpose. Who will read this? What will they expect? What level of depth or originality do they need? That's what should shape your prompt. Whether you use a structured framework or just write freely, what matters is clarity about the target and awareness of the terrain you're pulling from.

And sometimes, the best move is tactical: close the chat, open a fresh window. The weight of previous tokens, cached paths, and context history might be skewing everything. It's not your fault. The averages just got noisy. Start again, recalibrate, and aim for a better median.

Conclusion: When the average becomes the interface

One of the things that worries me is how the companies behind GenAI are learning to optimise for the average. The more people use these tools with prompt engineering templates and frameworks, the more the system starts shaping itself around those patterns. These models are trained to adapt, and what they're adapting to is us: our habits, our shortcuts, our structured formats. So what happens when the interface itself starts reinforcing those same averages? It becomes harder to reach anything outside the probable, the expected, the familiar. The weird, the original, the statistically unlikely, those start to fade into the background.

This becomes even more complicated when we look at agentic AI, the kind that seems to make decisions or deliver strong outputs on its own. It can be very convincing. But here's the issue: it's still built on averages. We risk handing over not just the task of writing or researching, but the act of thinking itself. And when the machine is tuned to reflect what's most common, we're not just outsourcing intelligence, we're outsourcing our sense of nuance, our ability to hold an opinion that doesn't sit neatly in the middle.

So the next time an AI gives you something that feels weirdly brilliant or frustratingly obvious, stop and consider what's really happening. It's not inventive. It's navigating, pulling from the most common, most accepted, most repeated paths it's seen before. Your prompt triggered that route, and that route reflects the prompts of thousands of others like you. Once you understand that, you can start steering more intentionally. You can recognise when your directions are being rerouted through popular lanes and when it's time to get off the highway. And sometimes, when the output is so average it feels broken, the smartest move is simple: close the window, reset the route, and start over. Because every now and then, the only way to find something new is to stop following the crowd.

By Hiba Hassan, Head of the Design and Visual Communications Department, SAE Dubai
