Former Chad PM arrested over alleged links to deadly clashes
Chad's former prime minister and opposition leader, Succès Masra, has been arrested over alleged links to clashes that took place on Wednesday in the south-west of the country, a public prosecutor has said.
He is suspected of spreading hateful messages on social media linked to the violence in which at least 42 people died, Oumar Mahamat Kedelaye said.
Masra's Transformers party said he had been "kidnapped by military officers in the early hours of the morning" and denounced his detention, which it said was "carried out outside of any known judicial procedure".
Masra is a fierce critic of President Mahamat Déby and claimed to have defeated him in elections last year.
Masra said his victory had been stolen "from the people", although the official results said Déby had won with 61% of the vote.
Wednesday's clashes broke out in the village of Mandakao, in Logone Occidental province near the Cameroonian border.
"Messages were circulated, notably on social networks, calling on the population to arm themselves against other citizens," Mr Kedelaye said.
It is not entirely clear what caused the violence, but one source told the AFP news agency that it was believed to have been triggered by a land dispute between farmers from the Ngambaye community and Fulani herdsmen.
There has been a troubling recent pattern of violence between local farmers and herders, with the farmers accusing the herders of grazing animals on their land.
More than 80 others have also been detained in connection with the clashes.
Masra briefly served as interim prime minister of the transitional government between January and May 2024.
His party boycotted legislative polls last December due to concerns over the transparency of the electoral process.
The Déby family has ruled Chad for more than three decades.
The military installed Déby as Chad's leader after his father, Idriss Déby Itno, was killed by rebels in 2021.
Additional reporting by Chris Ewokor