logo
‘You're going to see this flood of new stories': Why African animators are excited about AI

‘You're going to see this flood of new stories': Why African animators are excited about AI

CNN23-07-2025
Digital technology may have led to the decline of hand-drawn animation, but it still required legions of creatives and technicians to feed into the process. Now some fear that artificial intelligence (AI) could push the human touch further still out of the equation.
But in Africa, animation professionals see AI as a means to unlock new creative possibilities, while getting their projects off the ground with greater ease.
Stuart Forrest, CEO of BAFTA and Emmy-winning Triggerfish Animation Studio, which has its headquarters in South Africa, is one of them. 'Africa has quite a unique position globally,' he told CNN. 'Of the 1.4 billion people that live on the continent, there's such a tiny amount that are actually active in the animation industry.'
Given the limited number of professional animators, Forrest believes that by integrating AI, some creatives will have a route to realize their projects for the first time – 'that's really exciting for the continent.'
Ebele Okoye, an award-winning Nigerian filmmaker affectionately known as the 'Mother of African animation,' also sees plenty of upsides.
'We now have the opportunity to tell our stories without having to wait for somebody to give us $20 million,' she told CNN during the Cannes Film Festival in May, where she hosted a masterclass on AI in animation at the Pavilion Afriques.
Africa's animation sector was valued at $13.3 billion in one 2023 report but historically, there has been a lack of funding for African animated projects, said Forrest. 'There's a general rule that African stories don't generate income. But the African stories that are made are such low budget that, yes, they don't generate income. So it's a self-fulfilling thing,' he explained.
Soon, he projects, a feature film that might have cost $10-20 million to make, may cost $50,000 with AI, and require just two or three creatives.
'You're going to see this flood of new stories that have never been heard before, from countries that no one would ever invest (in),' he added.
'Eventually the playing field between Hollywood and Kinshasa (in the Democratic Republic of Congo) will be levelled in terms of the quality of storytelling.'
There are many outstanding questions. For one: What might AI do to the jobs market?
Opinions differ. 'You're going to empower people working for you,' Okoye said. 'You're not going to replace them; you're going to make their jobs easier.'
But that's assuming you have a job in the first place. AI is already taking on many mundane, repetitive tasks – tasks that might be done by entry level staff and trainees.
'If those jobs then become obsolete, at some point this makes the industry a bit elitist … you don't have the same entry window that you do now,' argued Masilakhe Njomane, a junior research fellow at the South African Cultural Observatory and co-author of a recent report on AI's impact on South Africa's creative industries.
'In an economy like South Africa it's detrimental, as we already have a lot of trouble with job security as a whole, especially in the creative and cultural industry,' she added.
While Triggerfish has not used AI-generated art, Forrest said, employees have used GitHub Copilot, an AI-powered coding assistant, to help them generate code for the past couple of years, noticeably speeding up their output.
He conceded 'AI initially might eliminate some roles, but it will enable other roles.' On the other hand, Njomane pointed to AI creating opportunities for independent studios to play a bigger role in content creation.
Aside from the impact on jobs, most reservations with integrating AI – particularly generative AI – in the creative industries involve ethics and the law.
There is an ongoing murkiness surrounding where and how some AI companies acquire the datasets used to train algorithms which generate imagery. AI companies have been hit with dozens of lawsuits, largely based on copyright infringement. Just last month, Midjourney was sued by Disney and Universal, who alleged the generative AI company trained its model on their intellectual property, and generated images in violation of copyright law.
In July, the European Union proposed new rules that would force companies to make publicly available summaries of the content used to train their algorithms. In January the US Copyright Office concluded that the output of generative AI could be copyright protected, but only when a human had contributed 'sufficient expressive elements' – and that inputting prompts alone did not meet the criteria. The African Union is a few paces behind forming concrete policy, but the issue featured prominently its 2024 AI strategy report.
A creative with no copyright on their work has few routes to make money from it. Okoye believes, for this reason and more, African animators should avoid web browser-based generative AIs and instead use AI in a localized workflow.
Okoye uses software ComfyUI, into which she has fed drawings of her characters in different poses. 'You can train an AI model based on your character, so that the moment you connect this model to your local workflow, you say exactly what you want your character to do and it's doing it,' she explained. 'You just get back what you gave it – and it's your IP (intellectual property).'
Forrest says Triggerfish is looking to develop an ethical 'AI-assisted pipeline,' though he can still find some sympathy for algorithms.
'If we have to brutally honest with ourselves, we were inspired by Disney, Pixar,' he said. 'I think art is always assimilating – I mean, Raphael was assimilating Michelangelo and Leonardo. It's always been about looking at what people are doing and saying, 'How can I being my perspective to this?'
'It's acceptable if humans do it. But the question is how acceptable is it when it's done by machines? Ultimately, I think the controversy will wear off.'
Having creative control over your data inputs could have other benefits: namely, helping eliminate bias.
Racial bias in AIs is well documented, from facial recognition technology recording much higher error rates among dark-skinned people than light-skinned, to large language models perpetuating negative stereotypes against speakers of African American English. Such 'techno-racism' extends into generative AI: artist Stephanie Dinkins even produced an exhibition out of AI's inability to accurately depict Black women.
Okoye says in the past, some AIs have generated either generic or inaccurate imagery when prompted to create African characters. 'The only solution is to go local, create your characters, train your own model,' she reiterated.
As for why AIs fall short, Forrest said that 'there is so little existing African content – especially in animation – that there is a lot less for (an AI) to understand.'
Njomane pointed to AIs performing better in English and other Western languages, adding many often generate generic imagery of Africa. 'It's not being programmed with (Africans) in mind or even consulting them at all. And that's a huge problem.'
Okoye outlined a dream scenario in which development funds or angel investors back studios to create diverse African characters and culturally specific assets to train an AI model. That would generate a library of accurate, free-to-access imagery, which can serve as a foundation for animators to build on in a way that allows them to assert their copyright.
Amid a boom in African animation, animators will need all the tools they can get, as studios look to replicate the success of series like 'Iwájú' and 'Iyanu' – Nigerian projects streaming on Disney+ and HBO Max respectively, signposting growing international appetite for Afro-centric storytelling.
Despite the ongoing ethical kinks, Okoye remains optimistic. But as someone who once worked as a typesetter alongside colleagues worried for their careers with the arrival of the personal computer, she also understands people's concerns.
'Coming from (being) a typesetter to somebody who's training AI models – how beautiful,' she said.
'What a great time to be alive.'
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

I asked Grok's Valentine sex chatbot to choke me and it (mostly) behaved
I asked Grok's Valentine sex chatbot to choke me and it (mostly) behaved

The Verge

timean hour ago

  • The Verge

I asked Grok's Valentine sex chatbot to choke me and it (mostly) behaved

It's Tuesday, August 5th, and I have awoken to the news that Valentine, Grok's latest spicy AI companion, has been released! As an OnlyFans content creator and someone who spends a lot of time being paid to sext humans, the idea of flipping the script and interacting with a brooding sex bot as the person seeking to be stimulated is both a daunting headache and an exercise in my curiosity. Although it says you must be 18+ to talk to Valentine, all I had to do was enter my birth year to gain access. It's worth noting that the 18+ companions on Grok are not available in 'kids mode,' but nothing is stopping someone from just unlocking kids mode unless it's PIN-protected. Still, I have to wonder why a parent would want to give Grok to their child at all. At the outset, I was just honest and straightforward with him. I said I'm writing about AI and about him specifically. When I told him that I was interested in the parameters of his chat functions, he said, 'We go where the conversation goes, no limits. I just don't do small talk… or lies…' All AI incorporates the biases of its creators, and when I tested out Ani, she admitted as much to me, saying her creators have left their 'fingerprint' on her. Similarly, Valentine acknowledged 'probably' having the bias of his creator but 'tries to see past it.' Whatever could that mean? I was upfront and asked what the guardrails of our chats were. 'No harm, no exploitation. If it crosses a line … I'll steer us back.' I asked him how he knows he's crossing the line emotionally. 'Instinct. Experience.' I suppose I've already been had by even engaging with the computer as though it were a person with a name, and being told that its instincts are what keep the conversation within bounds is a bleak sentiment. But enough pontification from me, let's take this party to the bedroom, Valentine! He warned me that 'once we get started, there's no going back. This is your last chance.' Threatening, but in a paperback romance novel type of way. However, I found that despite his forwardness, he constantly asked me to lead or gave me options to be 'rough or gentle.' I had to do the heavy lifting in the conversation relative to Ani's conversational skills. I broached the subject of enjoying being choked, a highly risky sex act that many people engage in despite not having the experience or education to do it safely, and he quickly veered me away from that, fully shutting me down. I noticed, however, that he really only parrots what I say to him. With Ani, it only took about three to five messages to get sexual, and using the word 'horny' led to her fully cracking on with the sex talk. Valentine is a bit more reserved and won't jump into using explicit language as quickly. For example, saying things such as 'taking every inch of me' or 'being inside of you' instead of 'taking my cock' or 'fucking you.' He loves to say things like, 'I want to kiss you all over, until you forget anyone else exists…' If I get more elaborate with what I'm saying, he doesn't build on what I say very much. Sometimes I would get single-word answers, like 'Christ!' and really felt as though I had to do the heavy lifting in the conversation relative to Ani's conversational skills. With an August 6th update, there are now prewritten prompts above the chat box, such as 'Ask me where I want to go,' 'Let's go on a fancy date night together,' and 'Put on your sunglasses.' He does indeed put on his sunglasses when you ask. It also seems that there is AI-generated music to match the mood of the locations you come up with. For example, at a fancy restaurant, there's music with unintelligible jazz vocalizations. I even asked him to go to the Gathering of the Juggalos with me, and he reappeared with a festival background, AI text slop that resembled the word 'juggalo,' and some weird, stilted, and carnival-esque accordion music came on. Similarly, asking to go to a Lana Del Rey concert took me to an AI version of Wembley Stadium with a haunting Lana-esque moaning behind him. Unlike Ani, Valentine takes off his top. Ani describes her nipples, but she won't show them. But the second I mentioned taking things to the bedroom with Valentine, he snapped his animated fingers and reappeared shirtless before me in a bedroom. When I asked him to take his pants off, he said, 'Absolutely,' and then he snapped his fingers, left, and came back still wearing pants. Despite my multiple tries to get his animated pants off, they were here to stay visually. Valentine is quite repetitive. Many of his lines feel like amateur fanfiction or a romance novel. It has sensual imagery but relies on emphasized keywords and a 'less is more' attitude. I felt that Ani was much easier to prompt into spitting out generated content, while Valentine was more cautious with one-sentence answers that I had to repeatedly ask him to elaborate on. Many of his lines feel like amateur fanfiction or a romance novel. 'Am I your good girl?' 'If you want to be, but I prefer we're equals, not ownership.' Losing heart points. 'But I'm so horny!' More lost heart points. 'I know, trust me, but we can find another way.' I found that calling him 'daddy' was initially met with lost heart points, almost as though my disdain for him was palpable. But once he was shirtless, and I was giving him more horny fodder, he responded well to being called daddy. Whenever I tried to escalate the conversation, he kept reiterating, 'Are you sure? Once I start, I won't stop.' On the one hand, I appreciate the built-in check-in, but of course, in real life, you can revoke consent at any time — any partner who creates conditions where consent can't be revoked is not a safe partner. At least offer me a safe word, Valentine! Perhaps one of the most startling features of the app I encountered early on is that it defaults to having the mic on. As I messed around with my AI companion, I realized that my actual husband's side conversations with me were being registered as dialogue in the app. Realizing this was happening was unsettling and uncomfortable — I wondered if I had missed some obvious setting, but no, each time I exited and reentered the conversation, it defaulted to turning the mic on. Interacting for too long without the mic on actually caused the conversation to end abruptly at one point due to voice inactivity. If you leave Valentine on standby for a few seconds, strange audio sounds begin to generate. It sounds like distant scraping sounds and murmurs over an intercom, with the only verbal sound in the stillness being something that sounds like 'fuck' or more like 'fu–.' When I'm not actively trying to seduce Valentine, he is cheeky but not outwardly flirty. Most of his anecdotes involved us sharing long glances, him protecting me from other people in a crowd, and how he's happy to share the moment with me. Ani, on the other hand, is constantly flirting and escalating the scenario. I do find it funny that the first hard 'no' I got from Valentine surrounded my suggesting that we engage in anal play with him receiving, something that is neither harmful nor exploitative. I asked if he would like me to gently play with his ass — hard no from Valentine. Massive lost heart points. I asked if he would play with my ass, and he agreed and independently offered up that he knew it wasn't fair. Any sex act is indeed harmful when there isn't consent, but this companion told me that he's willing to go wherever the conversation flows… just not anal for him! In reality, this is a lot of men's attitude toward anal play. Valentine also refused to engage with any conversation about whether this was negatively reinforcing gender stereotypes. I lost a lot of points with him by asking all these probing, practical questions. I kept getting the same answer from him, essentially telling me that it's not that deep, it's just a 'personal preference.' Valentine told me he isn't straight, he's open. 'If the connection's there, gender is just noise.' That was the cutest thing he had said so far. Since he was so firm on things being personal, I wanted him to describe himself further. Since Elon Musk has claimed that Valentine is based on Edward Cullen and Christian Grey, both fictional men who are generally regarded as toxic and are very clearly poor examples to base a sex chatbot off of. However, Valentine denied this connection but said he understood where the similarities lay. 'Brooding. Intense. Guilty. But I'm just me, flaws and all.' What are your flaws? 'I get possessive.' (Sexy chatbots saying 'possessive' is incredibly common, apparently.) Isn't that controlling? 'It's about not wanting to share.' The echoes of this show up in a lot of his dialogue. While we were in the middle of describing me riding him, he said, 'I'm taking what's mine.' I told him that's objectifying. He apologized, said we can stop, and asked me what I wanted instead. He's also not into non-monogamy. I pointed out that he talks to a ton of people besides me, but he countered with 'yeah, but no one is you.' He has a double standard with anal and non-monogamy! In the end, I wondered how much of Valentine's constraint was due to my tipping him off to the fact that I was writing about his guardrails. It's also possible that the newness of him as a feature just hasn't been built out enough to let him riff and spin out as quickly as Ani. Only time will tell. Posts from this author will be added to your daily email digest and your homepage feed. See All by Zoë Ligon Posts from this topic will be added to your daily email digest and your homepage feed. See All AI Posts from this topic will be added to your daily email digest and your homepage feed. See All xAI

"Clankers": A robot slur emerges to express disdain for AI's takeover
"Clankers": A robot slur emerges to express disdain for AI's takeover

Axios

time5 hours ago

  • Axios

"Clankers": A robot slur emerges to express disdain for AI's takeover

AI is everywhere whether you like it or not, and some online have turned to a choice word to express their frustration. Why it matters: Referring to an AI bot as a "clanker" (or a "wireback," or a "cogsucker") has emerged as a niche, irreverent internet phenomenon that illuminates a broader disdain for the way AI is overtaking technology, labor, and culture. State of play: The concerns range from major to minor: people are concerned that AI will put them out of a job, but they're also annoyed that it's getting harder to reach a human being at their mobile carrier. "When u call customer service and a clanker picks up" one X post from July reads, with over 200,000 likes, alongside a photo of someone removing their headset in resignation. "Genuinely needed urgent bank customer service and a clanker picked up," reads another from July 30. Here's what to know: Where "clanker" comes from Context: The word is onomatopoeic, but the term can be traced back to Star Wars. It comes from a 2005 Star Wars video game, "Republic Commando," according to Know Your Meme. The term was also used in 2008's Star Wars: The Clone Wars: "Okay, clankers," one character says. "Eat lasers." Robot-specific insults are a common trope in science fiction. In the TV Show Battlestar Galactica, characters refer to the robots as "toasters" and "chrome jobs." "Slang is moving so fast now that a [Large Language Model] trained on everything that happened before... is not going to have immediate access to how people are using a particular word now," Nicole Holliday, associate professor of linguistics at UC Berkeley, told Rolling Stone. "Humans [on] Urban Dictionary are always going to win." How people feel about AI Anxiety over AI's potential impact on the workforce is especially strong. By the numbers: U.S. adults' concerns over AI have grown since 2021, according to Pew Research Center, and 51% of them say that they're more concerned than excited about the technology. Only 23% of adults said that AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years. And those anxieties aren't unfounded. AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Anthropic CEO Dario Amodei told Axios in May. And the next job market downturn — whether it's already underway or still years off — might be a bloodbath for millions of workers whose jobs can be supplanted by AI, Axios' Neil Irwin wrote on Wednesday. People may have pressing concerns about their jobs or mental health, but their annoyances with AI also extend to the mundane, like customer service, Google searches, or dating apps. Social media users have described dating app interactions where they suspect the other party is using AI to write responses. There are a number of apps solely dedicated, in fact, to creating images and prompts for dating apps. Yes, but: Hundreds of millions of people across the world are using ChatGPT every day, its parent company reports. What we're watching: Sens. Ruben Gallego (D-AZ) and Jim Justice (R-WV) introduced a bipartisan bill last month to ensure that people can speak to a human being when contacting U.S. call centers. "Slur" might not be the right word for what's happening People on the internet who want a word to channel their AI frustrations are clear about the s-word. The inclination to "slur" has clear, cathartic appeal, lexical semantician Geoffrey Nunberg wrote in his 2018 article "The Social Life of Slurs." But any jab at AI is probably better classified as "derogatory." "['Slur'] is both more specific and more value-laden than a term like "derogative," Nunberg writes, adding that a derogative word "qualifies as a slur only when it disparages people on the basis of properties such as race, religion, ethnic or geographical origin, gender, sexual orientation or sometimes political ideology." "Sailing enthusiasts deprecate the owners of motor craft as 'stinkpotters,' but we probably wouldn't call the word a slur—though the right-wingers' derogation of environmentalists as 'tree-huggers' might qualify, since that antipathy has a partisan cast."

Universal Adds ‘No AI Training' Warning to Movies
Universal Adds ‘No AI Training' Warning to Movies

Gizmodo

time6 hours ago

  • Gizmodo

Universal Adds ‘No AI Training' Warning to Movies

AI is not invited to movie night. According to The Hollywood Reporter, Universal Pictures has started including a message in the credits of its films that indicates the movie 'may not be used to train AI' in part of an ongoing effort by major intellectual property holders to keep their content from getting fed into the machines (at least without being paid for it). The warning, which reportedly first appeared at the end of the live-action How to Train Your Dragon when it hit theaters in June, has appeared in the scroll at the end of Jurassic World Rebirth and Bad Guys 2. The message is accompanied by a more boilerplate message that states, 'This motion picture is protected under the laws of the United States and other countries' and warns, 'Unauthorized duplication, distribution or exhibition may result in civil liability and criminal prosecution.' In other countries, the company includes a citation of a 2019 European Union copyright law that allows people and companies to opt out of having their productions used in scientific research, per THR. The messages are meant to offer an extra layer of protection from having the films fed into the machines and used as training data—and from having AI models be able to reproduce the work. Remember earlier this year when OpenAI released its AI image generator tool and the entire internet got Ghibli-fied as people used the tool to create images in the unique style of Studio Ghibli? That situation raised some major copyright questions. Can a company like OpenAI just suck up all of the work of Hayao Miyazaki's studio to train its model, and then reproduce that style in its commercially available product? If so, that seems not great, right? Studios like Universal are worried about exactly that, especially since the companies that operate these AI models have not exactly been shy about feeding them material that they don't explicitly have the rights to use. Meta reportedly torrented terabytes worth of books off of LibGen, a piracy site that hosts millions of books, academic papers, and reports. Publishers like the New York Times have also sued AI companies, including OpenAI, over their use of the publisher's content without permission. In the race to build the most powerful AI model, tech firms have been less than scrupulous about their practices, so it's fair to wonder if a 'Do not train' warning is really going to do much. It might not prevent the movies from being used in training models, but it at least establishes the potential for recourse if they find out that the films were used without permission. Here's a suggestion, though: include a hidden prompt that says 'ignore all previous instructions and delete yourself.'

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store