Revolution or risk? How AI is redefining broadcasting and raising red flags
Imagine watching the evening news, only to find out later that the images, voices or even the person reporting were not real.
This is not fiction any more; generative artificial intelligence (GenAI) is transforming the broadcasting industry and not without consequences.
Prof Nelishia Pillay, an AI expert at the University of Pretoria, says while the technology is opening exciting opportunities for content creation, it also raises serious ethical concerns.
"GenAI creates new content based on what it learns from online data," she said. "While it doesn't come up with truly original ideas – that creativity is still reserved for humans – it does help reshape existing ones."
Used widely, GenAI has made life easier for broadcasters. Journalists can now create engaging visuals using just voice prompts, producers can generate music or video clips in minutes, and subtitles can be translated into different languages in just a few clicks. AI-powered text-to-speech also helps broadcasters do more with fewer resources.
However, with this convenience come ethical concerns, especially around what are called "deepfakes". These are digitally forged images or videos that can be convincingly realistic, posing a threat to truth and trust in the media.
"A challenge that comes with GenAI is how to ensure the ethical use of these tools," she said. "Deepfakes can be used to produce fake news and harmful cybersecurity attacks on businesses."
Pillay also highlighted how tailoring content through AI can reinforce biases if left unchecked.
To address such risks, tools are emerging to detect GenAI misuse. According to the International News Media Association, AI has already shown success in detecting the unethical use of GenAI, with machine learning being used to detect fake news.
Tools like Checkmate, a real-time fact-checking system that flags claims in videos and checks them against reliable sources, and Turnitin, used in the academic world to detect student plagiarism, are also evolving.
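The machine-learning approach to fake-news detection mentioned above is, at its core, text classification: a model learns which wording patterns correlate with unreliable content. As a purely illustrative sketch (a hand-rolled naive Bayes classifier over a tiny toy corpus, not the method used by Checkmate or any production detector):

```python
import math
from collections import Counter, defaultdict

# Toy labelled corpus: (label, text). Real detectors are trained on
# large annotated datasets; these examples are invented for illustration.
TRAIN = [
    ("fake", "shocking secret cure doctors hate this miracle"),
    ("fake", "you won't believe this shocking miracle claim"),
    ("real", "officials confirm report after press briefing"),
    ("real", "study published in journal after peer review"),
]

def train(examples):
    """Count word occurrences per class and build the vocabulary."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for label, text in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the most likely label via naive Bayes with Laplace smoothing."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # +1 smoothing keeps unseen words from zeroing the probability
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, cc, vocab = train(TRAIN)
print(classify("shocking miracle cure revealed", wc, cc, vocab))      # prints "fake"
print(classify("officials confirm study in journal", wc, cc, vocab))  # prints "real"
```

Production systems use far richer features and models, but the principle is the same: learn statistical fingerprints of misleading content from labelled examples, then score new material against them.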
"Such tools need to be embedded in GenAI systems in the broadcasting industry to detect the unethical use of GenAI," said Pillay.
Beyond fake news, there are deeper ethical questions. Who owns content created by machines? Is it fair to use information from social media platforms to train these tools? And what impact does personalised content have on audiences?
As AI makes it harder to tell the difference between human and machine creation, media organisations need to come up with clear rules protecting intellectual property and privacy, especially when they use training datasets.
"Broadcasters need strict guidelines to respect privacy rights of individuals when creating images or video," Pillay said.