Is Google's search monopoly ending? What this could mean for digital marketing

IOL News | 24-06-2025
DeepSeek and other entrants are challenging the search dominance of Google.
Image: Solen Feyissa/Unsplash
Earlier this month, I found myself picking at something that's been nagging at me of late. So I did what any insight-seeking strategist does these days - I asked X: "Who else is currently pondering answer engine and AI agent optimisation?"
The response from Ross Simmonds, the founder of Canadian B2B marketing agency Foundation and author of Create Once, Distribute Forever: How Great Creators Spread Their Ideas and How You Can Too, was immediate: a wave emoji. What ensued was a conversation that crystallises something you might be sensing.
How we got here
For the past two decades, Google has essentially owned the internet's front door. Here's how its empire worked: you searched for something, and Google showed you ten blue links surrounded by adverts. If you wanted your business to appear in those results, you played by Google's rules - either through search engine optimisation (SEO), where you twisted your content to please Google's algorithms, or through AdWords, where you paid to appear at the top.
This system shaped everything. Entire industries sprang up around gaming Google's preferences. Content creators wrote for robots first, humans second. Marketing budgets poured into deciphering what Google wanted, then delivering it.
Now that's changing. Instead of ten blue links, we're getting direct answers from AI systems like ChatGPT, Google Gemini, Perplexity, and dozens of others, including newer open-source entrants like DeepSeek. Ask "What's the capital of Mali?" and these tools simply tell you "Bamako" rather than sending you to Wikipedia or trying to sell you a holiday package.
New game
But here's where it gets interesting, and where my conversation with Simmonds began. These new "answer engines" (as the digital content and marketing industries are starting to dub them) face the same fundamental challenge Google did: how do you make money from giving people information?
During our brief X exchange, I found myself describing what feels wrong about some of these new systems: "Imagine asking a shop assistant a basic question and instead of just answering, they stall - fishing for your intent, upselling alternatives, or quietly collecting your data to monetise your attention."
I get it, though. These companies have raised billions in funding. They've got cutting-edge infrastructure to pay for, staff to employ, shareholders to satisfy. The idealistic vision of "just answer the question" crashes into commercial reality pretty quickly.
Where it gets complicated
Simmonds reckons that there's going to be a split: "Information retrieval vs emotional connection. Many will rely on the AI to simply get information (i.e. how long should I bake my lasagne) but they'll rely on emotional channels (podcasts, reels, TikToks and YouTube) to understand 'how to make lasagne like a grandma from Tuscany.'"
This feels profound. We may well be creating two internet economies: one for facts, handled by machines such as AI agents, and another for meaning, still very much human territory.
Pattern recognition
I'm struck by my own experience developing and executing content strategies and tactical media plays for leading global organisations. Working on community-building assignments and ecosystem engagement projects, I found that the most successful approaches weren't about gaming Google's algorithm or buying more AdWords placements.
They were about genuinely useful answers to real stakeholder questions, particularly from founders and investors, delivered through compelling media and meaningful in-person engagement.
But even then, I noticed that over-reliance on advertising channels like AdWords felt precarious - not just because I've always been uncomfortable with hard-selling and hijacking people's attention, but because, at some fundamental level, sustainable business happens between people who trust each other.
Commercial reality
Here's what I think is happening with these new AI systems, and why it matters for anyone trying to reach customers online: the companies building them are facing the same pressure Google did to figure out monetisation.
Some are optimising for keeping you on their platform longer. Others are cutting deals with specific information providers. Many are collecting detailed data about what you're asking to build advertising profiles.
We're already seeing the early signs: Perplexity's licensing deals with (mostly) Western publishers, WPP's digital marketing partnership with Claude (Anthropic), query limits for free users on various platforms, 'premium' answer tiers, and experiments with sponsored responses that prioritise certain sources over others.
Ultimately, for them, it's just business. And that means that these systems are developing their own biases and blind spots, just as Google's did.
The human element
By the end of our brief exchange, Simmonds and I found ourselves aligned on something: "...the lasting moat exists for people," he said. The technical systems will evolve to handle the mechanical aspects of information delivery, but human connection, cultural context, and authentic perspective remain irreplaceable.
It's not about choosing sides between human and artificial intelligence. It's about recognising that as these new systems reshape how information flows, the premium on genuine human insight - the kind that feels personally and culturally grounded - is only going to grow.
Google's two-decade reign over internet search might be ending, but the real question isn't who's won. It's what kind of information ecosystem we're building next, and whether we can do better than the attention-hijacking game that got us here in the first place.
Andile Masuku is Co-founder and Executive Producer at African Tech Roundup. Connect and engage with Andile on X (@MasukuAndile) and via LinkedIn.
*** The views expressed here do not necessarily represent those of Independent Media or IOL.
BUSINESS REPORT