
AI didn't kill writing — we did
Being a grammar snob is so 2012, but I'm probably not alone in this one. For years, the em dash (—) going mainstream wasn't on the cards. But now it's everywhere, tucked into every second article, LinkedIn sermon and long-form X post that wants to sound groundbreaking but falls flat.
It wasn't always like this. Outside of yellowing novel pages, the em dash, especially the kind without spaces on either side, was the preserve of prestige American texts. It belonged in literary-leaning publications like The Atlantic, Vanity Fair, The New York Times, Pitchfork… publications with writers and editors who know how to break rhythm with elegance for a readership that gets it.
Now the em dash has gone corporate. The issue is not that it's going mainstream, but how it's being used.
Blame the machine
It's ChatGPT's fault. And that of all other large language models (LLMs) like Google's Gemini and Elon Musk's Grok. For better or worse, LLMs are widely used in writing today. But, while they can assist, most LLM-generated text lacks that personal touch. It reads broad and glossy. Emotionally neutral and safe.
You can often tell when a piece of writing has been LLM generated, not just by its tone, but by its punctuation. The em dash appears like a glitch in the code, levitating where a comma, semicolon, colon or even just a space would've worked better. It's like narrative duct tape — functional but overused.
And yet, humans are letting it slide. Or worse, they are copying the style without realising why.
The popularity of AI in SA
ChatGPT is the fifth-most visited website in South Africa, after Google, YouTube, Facebook and, sadly, Hollywoodbets. Globally, it received 4.7 billion visits in April alone, up 51% from just two months earlier. In the AI search space, ChatGPT now commands over 80% of the traffic.
People are using it to write everything from school essays to corporate blogs and press releases. Financial challenges, rife in the media space all over the world, are pushing publications to quietly consider the use of AI-generated articles. I've heard the whispers and read the actual texts.
One never knows the prompts people are feeding LLMs, but the output speaks volumes. Whole articles, captions and bios that sound templated. You can feel the generic thrum of machine-generated rhythm.
Deeper than punctuation
To me, as an emerging writer in the 2010s, the em dash was aspirational. My Lenovo didn't have the symbol on the keyboard, but my Mac made it easier, though still a two-key job. In my delusions, I felt the em dash elevated my writing and set it apart because em dashes were never common in South African writing.
Echoing the New York Times style I admired, I'd open with anecdotal leads and pepper the body with em dashes, only to have them stripped out by sub-editors and replaced with spaced hyphens, en dashes or colons and commas.
But today, it's a different story.
This isn't to say an em dash is a sign of LLM-generated text. But in a South African text, it does make you pause. There are times when it's the only punctuation that truly works for cutting across a thought or surprising the reader, but when you start seeing it everywhere, it becomes suspicious. And it points not just to laziness, but to carelessness.
LLMs aren't writers. They're tools. With access to nearly the entire internet, they're brilliant at research, summarising, organising thoughts and even giving technical feedback. But relying on LLMs to write for you, no edits, no effort, is the plastic surgery of writing.
'Basically, AI is a very fancy autocomplete.' An LLM generates responses by predicting the most likely next word based on its training data, not by truly understanding meaning. Which means, if you are going to use it, edit.
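To make the "fancy autocomplete" point concrete before getting to the tells, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers and PyTorch libraries and uses the small GPT-2 model as a stand-in (not the model behind ChatGPT or any other product); the prompt is invented for illustration.

```python
# Illustration only: "fancy autocomplete" as next-word (token) prediction.
# Assumes the transformers and torch packages are installed; GPT-2 is a small
# open model used here as a stand-in, not the model behind any chatbot.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The em dash is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Scores for the final position become probabilities over the whole
# vocabulary: "given everything typed so far, what comes next?"
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Chain that single prediction step over and over and you get fluent, statistically average prose; that averaging is a big part of why unedited output reads broad, glossy and templated.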
Start with the punctuation, then move on to the other tells.
First, the clause trio, a rhythm AI loves: three short, punchy phrases or words, usually deployed to punctuate the 'it's not just …, it's a …' sentence structure.
Next, the Oxford comma, a largely American habit, now cropping up in every other LinkedIn post and amapiano press release.
Then there's the overly measured tone — serious, but generic. Works best for motivational speakers and, eh, life coaches.
Even when humans write this way without the use of AI, sadly, the work just reads … suspect.
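None of these tells is proof on its own, but they are regular enough to check for mechanically. The rough Python sketch below, purely illustrative, counts a few of the patterns discussed above in a draft; the regular expressions and the sample text are my own crude approximations, not any editor's actual blacklist and certainly not a real AI detector.

```python
# Deliberately crude illustration: counting a few of the "tells" above in a
# draft. These patterns are rough approximations for demonstration; this is
# not a real AI detector and not any editor's actual blacklist.
import re

TELLS = {
    "em dash": re.compile(r"\u2014"),
    "'not just X, it's Y'": re.compile(r"\bnot just\b[^.]*?,\s*it['\u2019]s\b", re.IGNORECASE),
    "Oxford comma": re.compile(r"\w+,\s+\w+,\s+(?:and|or)\s+\w+"),
}

def flag_tells(text: str) -> dict[str, int]:
    """Count how often each pattern appears in a piece of text."""
    return {name: len(pattern.findall(text)) for name, pattern in TELLS.items()}

draft = (
    "This launch is not just a product, it's a movement \u2014 "
    "bold, fresh, and ready to disrupt."
)

for name, count in flag_tells(draft).items():
    if count:
        print(f"{name}: {count} occurrence(s) worth a second look")
```

A human editor would catch all of this in a single read; the point is only that these habits are so formulaic they can literally be matched with patterns, which is why sceptical readers (and detection tools) latch onto them.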
So what now? Write like a human?
Writers are starting to worry. There have been cases where AI filters flag human work as machine-written. X users told one writer to ditch the em dashes, colons and semi-colons altogether.
Editors already have a list of blacklisted phrases. Now that list includes the punctuation marks and indicators mentioned above — or your work may be dismissed as synthetic.
Which brings us to a bigger question: what is AI doing to writing?
Experts argue that, outside the generic sentence-stitching, relying on LLMs is detrimental to our thinking and reasoning capacity.
'It turns you from an active seeker of information into a passive consumer of information, and I don't know if, in the long term, that is a good shift for us to be making,' says Celia Ford, an American science journalist.
Ford admits that, through technological tools, humans have been doing a lot of 'cognitive off-loading'. She mentions calendar reminders, GPS and even the idea of writing stuff down instead of memorising it.
But there's a caveat.
'When we let LLMs write essays or code for us,' she says, 'we are giving up something that feels, at least to me, pretty central to humanness: critical thinking and creativity, and we are risking letting these tools think for us, instead of aiding us in our own thinking.'
Always invite AI to the table
However, despite the concerns, the fact is AI is not going anywhere. It may be bad for the environment, but so are fossil fuels and many other technologies we can't live without.
In an era when publications are understaffed, leaving minimal time for editing drafts, I can't imagine working without Grammarly, which is still a form of AI. It doesn't write and it isn't generative like LLMs; it assists, refining what you've already put on the page. Sometimes it can suck the soul out of your writing, and that's when the human brain should take over. But, overall, it improves the quality of your draft.
Refusing to use AI at all, as noble as it might be, is backward and somewhat masochistic, but also on-brand for humans. All new technologies get criticised by purists. Guns were once seen as cowardly in combat. Cars were dismissed as loud and dangerous by horse riders. Even typewriters were accused of ruining handwriting. How about digitally produced music? We've come a long way.
Ethan Mollick, the Wharton professor and author of Co-Intelligence, gets it. 'We have never built a generally applicable technology that can boost our intelligence,' he writes. 'Now humans have access to a tool that can emulate how we think and write, acting as a co-intelligence to improve (or replace) our work.'
He isn't afraid of that collaboration, however; he embraces the machine and encourages us to do the same: 'Always invite AI to the table.'
'In field after field,' he writes, 'a human working with an AI co‑intelligence outperforms all but the best humans working without an AI.'
Tech changes every art form
Technology has transformed all major art forms. In electronic music, computers gave us hi-hats that rattle faster than any drummer could ever play, creating a texture musicians weren't familiar with. What is the world without trap music and EDM?
In the 16th century, the camera obscura projected scenes onto a canvas, allowing artists to trace subjects, leading to more immersive art.
CGI has made entire universes possible. Marvel's billion-dollar empire couldn't exist without it.
Mark Zuckerberg recently said most of Meta's code will be handled by AI going forward. 'It can run tests, it can find issues, it will write higher quality code than the average very good person on the team already,' Zuckerberg said in a podcast interview with Dwarkesh Patel.
Writing: What will AI bring?
Maybe speed. Maybe a new kind of prose. Hopefully not just more em dashes in your feed, but a deeper shift in how we think on the page.
That's only possible if the human stays in control. If we surrender the process completely, what's left is not writing, just passive copy and paste.
But there are more efficient ways to use LLMs. 'It's not that the LLM is giving me the answers,' said David Perell in his visual essay The Ultimate Guide to Writing with AI, 'it's that the LLM is helping me ask good questions … like shining a spotlight in different corners of my brain and helping me find treasure boxes of insight I would've never found on my own.'
There are lines. Where they are drawn is deeply subjective. But seeing no line at all should be collectively condemned.
