Grok's 'white genocide' misinformation shows need for African-tailored AI algorithms


Mail & Guardian, 24-05-2025

X marks the spot: Elon Musk's company xAI said a modification by an employee had caused Grok to accept 'white genocide' as a fact.
Most artificial intelligence bots are trained outside the continent, raising questions of bias
This content is restricted to subscribers only.


Related Articles

Revolution or risk? How AI is redefining broadcasting and raising red flags

TimesLIVE, 5 hours ago

Imagine watching the evening news, only to find out later that the images, voices or even the person reporting were not real. This is no longer fiction; generative artificial intelligence (GenAI) is transforming the broadcasting industry, and not without consequences.

Prof Nelishia Pillay, an AI expert at the University of Pretoria, says that while the technology is opening exciting opportunities for content creation, it also raises serious ethical concerns. "GenAI creates new content based on what it learns from online data," she said. "While it doesn't come up with truly original ideas – that creativity is still reserved for humans – it does help reshape existing ones."

Used widely, GenAI has made life easier for broadcasters. Journalists can now create engaging visuals using just voice prompts, and producers can generate music or video clips in minutes and translate subtitles into different languages in just a few clicks. AI-driven text-to-speech likewise helps broadcasters do more with fewer resources.

With this convenience, however, come ethical concerns, especially around so-called "deepfakes": digitally forged images or videos convincing enough to pose a threat to truth and trust in the media. "A challenge that comes with GenAI is how to ensure the ethical use of these tools," she said. "Deepfakes can be used to produce fake news and harmful cybersecurity attacks on businesses." Pillay also highlighted how tailoring content through AI can reinforce biases if left unchecked.

To address such risks, tools are emerging to detect GenAI misuse. According to the International News Media Association, AI has already shown success in detecting the unethical use of GenAI, with machine learning being used to detect fake news.
Tools like Checkmate, a real-time fact-checking system that flags claims in videos and checks them against reliable sources, and Turnitin, used in the academic world to detect plagiarism, are also evolving. "Such tools need to be embedded in GenAI systems in the broadcasting industry to detect the unethical use of GenAI," said Pillay.

Beyond fake news, there are deeper ethical questions. Who owns content created by machines? Is it fair to use information from social media platforms to train these tools? And what is the impact of personalised content on audiences? As AI makes it harder to tell the difference between human and machine creation, media organisations need clear rules protecting intellectual property and privacy, especially when they use datasets. "Broadcasters need strict guidelines to respect the privacy rights of individuals when creating images or video," Pillay said.

What happens if Afrikaner 'refugees' want to come home?

The South African, 6 hours ago

A second batch of Afrikaner 'refugees' has settled into the US, and thousands more are expected to arrive. However, with South African-born Elon Musk's ongoing feud with President Donald Trump, concern is mounting among those who now call America home. More importantly, will they be allowed to return to South Africa?

It's no secret that President Donald Trump's view of South Africa was influenced by his former right-hand man, Elon Musk. Born in Pretoria, and having left at just 17, the world's richest man has actively accused his country of birth of promoting 'white genocide', 'farm confiscation' and 'racist laws'. With Trump now threatening to cut all government contracts to Musk, and possibly distance himself from him altogether, what does this mean for the Afrikaner 'refugees' already in the US?

The group of white South Africans was offered asylum on the basis of a proven fear of persecution on the grounds of race, religion, nationality, political opinion or membership of a particular social group. Under US and international immigration law, refugees who have received asylum cannot simply be deported to their country of persecution, and legislation vigorously protects their right to remain in their new adoptive country. However, their status can be terminated if they are found guilty of serious crimes or if they fraudulently sought asylum.

According to the US Citizenship and Immigration Services (USCIS), there is a strict vetting process for those who seek refugee status in the country. Once resettled in the US, refugees are permitted to work immediately and become eligible for a green card a year later. However, stringent travel restrictions also apply: the USCIS states that all refugees must obtain permission via a travel document before they depart the US. Failure to do so could result in asylees being denied re-entry into the US.
These actions could also strip asylees of their refugee status, resulting in removal proceedings before an immigration judge. In the case of Afrikaners, returning to South Africa – considered the country of 'persecution' – could come at a cost:
- Refugees who visit SA without prior permission are presumed to have abandoned their asylum unless they can establish compelling reasons for their return.
- Refugees are expected to explain the reason for their return when applying for permission.
- Returning to one's country of claimed persecution could result in termination of asylum if there have been fundamental changes in circumstances back home.
- Termination could also occur due to fraud in the asylum application, where the applicant was never eligible for refugee status.
- Returning to SA for business or pleasure can be taken as evidence that the refugee's alleged fear of persecution is not genuine.

For Afrikaner 'refugees' hoping to return home for good, voluntary repatriation is always an option. Organisations like the UNHCR help those who wish to travel back to their countries by providing financial assistance. According to the South African Department of International Relations, 'refugees' would be welcomed back as citizens; however, they would have to relinquish their refugee status for this to happen.

Top scientist wants to prevent AI from going rogue

IOL News, 7 hours ago

Concerned about the rapid spread of generative AI, a pioneering researcher is developing software to keep tabs on a technology that is increasingly taking over human tasks. Canadian computer science professor Yoshua Bengio is considered one of the godfathers of the artificial intelligence revolution, and on Tuesday announced the launch of LawZero, a non-profit organization intended to mitigate the technology's inherent risks.

The winner of the Turing Award, often described as the Nobel Prize of computer science, has been warning for several years of the risks of AI, whether through its malicious use or the software itself going awry. Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost. "Currently, AI is developed to maximize profit," Bengio said, adding that it was being deployed even as it continued to show flaws.

Moreover, for Bengio, giving AI human-like agency could easily be exploited for malicious purposes such as disinformation, bioweapons and cyberattacks. "If we lose control of rogue super-intelligent AIs, they could greatly harm humanity," he said.
One of the first objectives at LawZero will be to develop Scientist AI, a form of specially trained AI that can be used as a guardrail to ensure other AIs are behaving properly, the organization said. It already has over 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy. The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy while still showing significant problems, including a capability to deceive and to fabricate false information even as they increase productivity.
