Revolution or risk? How AI is redefining broadcasting and raising red flags
Imagine watching the evening news, only to find out later that the images, voices or even the person reporting were not real.
This is no longer fiction; generative artificial intelligence (GenAI) is transforming the broadcasting industry, and not without consequences.
Prof Nelishia Pillay, an AI expert at the University of Pretoria, says while the technology is opening exciting opportunities for content creation, it also raises serious ethical concerns.
"GenAI creates new content based on what it learns from online data," she said. "While it doesn't come up with truly original ideas – that creativity is still reserved for humans – it does help reshape existing ones."
Used widely, GenAI has made life easier for broadcasters. Journalists can now create engaging visuals using just voice prompts, producers can generate music or video clips in minutes, and subtitles can be translated into different languages in just a few clicks. Converting text to speech with AI also helps broadcasters do more with fewer resources.
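As a rough illustration of that text-to-speech step, the sketch below uses the open-source pyttsx3 library; this is an assumption for demonstration, as broadcasters typically rely on commercial neural voice services rather than this offline engine.

```python
# Minimal text-to-speech sketch using the open-source pyttsx3 library.
# Real broadcast workflows use richer neural TTS; this simply shows the
# "script in, audio out" step described above.
import pyttsx3

engine = pyttsx3.init()              # load the platform's default voice
engine.setProperty("rate", 160)      # speaking pace in words per minute
engine.say("Good evening, and welcome to the late bulletin.")
engine.runAndWait()                  # block until playback finishes
```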
However, with this convenience come ethical concerns, especially around what are called "deepfakes". These are digitally forged images or videos convincing enough to pose a threat to truth and trust in the media.
"A challenge that comes with GenAI is how to ensure the ethical use of these tools," she said. "Deepfakes can be used to produce fake news and harmful cybersecurity attacks on businesses."
Pillay also highlighted how tailoring content through AI can reinforce biases if left unchecked.
To address such risks, tools are emerging to detect GenAI misuse. According to the International News Media Association (INMA), AI has already shown success in detecting the unethical use of GenAI, with machine learning being used to spot fake news.
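To make the machine-learning claim concrete, one common approach is a text classifier trained on articles already labelled real or fake. The sketch below is illustrative only: the library choice (scikit-learn) and the toy headlines are assumptions, not the tooling INMA describes.

```python
# Toy fake-news classifier: TF-IDF features + logistic regression.
# The headlines and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Central bank raises interest rates by 25 basis points",
    "Parliament passes annual budget after lengthy debate",
    "Miracle cure lets you regrow teeth overnight, doctors stunned",
    "Secret moon base confirmed by anonymous insider",
]
labels = ["real", "real", "fake", "fake"]   # hypothetical training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score a new, unseen headline.
print(model.predict(["Aliens endorse presidential candidate, sources say"]))
```

In practice such detectors are trained on large labelled corpora and combined with other signals; the pipeline above only shows the basic shape of the technique.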
Tools are also evolving, such as Checkmate, a real-time fact-checking system that flags claims in videos and checks them against reliable sources, and Turnitin, which is used in the academic world to detect student plagiarism.
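The claim-checking step that systems like Checkmate perform can be approximated, very roughly, by measuring how similar a flagged claim is to statements from trusted sources. The snippet below is an assumption about the general technique, not Checkmate's actual method, and the sources listed are invented placeholders.

```python
# Rough sketch of claim-vs-source matching via TF-IDF cosine similarity.
# Real fact-checking systems use far richer retrieval and verification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "The city reservoir is at 40 percent capacity"
trusted_sources = [
    "Water authority reports the city reservoir at 40 percent capacity",
    "Stadium renovation to finish ahead of schedule, council says",
]

vec = TfidfVectorizer().fit(trusted_sources + [claim])
scores = cosine_similarity(vec.transform([claim]),
                           vec.transform(trusted_sources))[0]

# The best-matching source hints at whether the claim is corroborated.
best = scores.argmax()
print(f"Closest source (score {scores[best]:.2f}): {trusted_sources[best]}")
```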
"Such tools need to be embedded in GenAI systems in the broadcasting industry to detect the unethical use of GenAI," said Pillay.
Beyond fake news, there are deeper ethical questions. Who owns content created by machines? Is it fair to use information from social media platforms to train these tools? And what impact does personalised content have on audiences?
As AI makes it harder to tell the difference between human and machine creation, media organisations need clear rules protecting intellectual property and privacy, especially around the datasets used to train these systems.
"Broadcasters need strict guidelines to respect privacy rights of individuals when creating images or video," Pillay said.