Second suspected sabotage in France as power cut hits Nice
A second power outage in two days has hit the French Riviera after an overnight fire at a substation in Nice, which authorities said was caused by a malicious act.
At least 45,000 homes were affected after the blaze broke out at around 02:00 local time (01:00 BST) on Sunday, a day after nearby Cannes suffered a massive blackout that was blamed on suspected sabotage.
Police in Nice say "tyre tracks" were found and the door to the substation, in the west of the city, was "broken", according to local media reports.
Nice Airport, the tramway network, and the neighbouring towns of Saint-Laurent-du-Var and Cagnes-sur-Mer were affected before power was restored later in the morning.
Nice's mayor Christian Estrosi said on X that he "strongly denounced" the "malicious acts that affect our country".
The city's deputy mayor, Gaël Nofri, said the substation fire was "probably of criminal origin".
It came a day after Cannes suffered a major blackout during the international film festival. Officials said it may have been caused by an arson attack on a substation.
Around 160,000 homes in the city and surrounding areas lost power.
Several screenings were interrupted by the power cut in the morning, before festival organisers were able to switch to private generators.
At the moment, no link has been established between the two incidents.
Estrosi said authorities would reinforce security around Nice's electrical sites. An investigation into "organised arson" has been opened.
Nice prosecutor Damien Martinelli was quoted by AFP news agency as saying investigations were underway, in particular "to clarify the damage and the manner in which the act was committed".

Related Articles
Trump offers no rest for lifelong US activist couple
They've lost count of how many times they've been arrested, but even with a combined age of 180 years, American couple Joseph and Joyce Ellwanger are far from hanging up their activist boots. The pair, who joined the US civil rights rallies in the 1960s, hope protesting will again pay off against Donald Trump, whose right-wing agenda has pushed the limits of presidential power.

"Inaction and silence do not bring about change," 92-year-old Joseph, who uses a walker, told AFP at a rally near Milwaukee in late April. He was among a few hundred people protesting the FBI's arrest of Judge Hannah Dugan, who is accused of helping an undocumented man in her court evade migration authorities. By his side -- as always -- was Joyce, 88, carrying a sign reading "Hands Off Hannah."

They are certain that protesting does make a difference, despite some Americans feeling despondent about opposing Trump in his second term. "The struggle for justice has always had so much pushback and difficulty that it almost always appeared as though we'll never win," Joseph said. "How did slavery end? How did Jim Crow end? How did women get the right to vote? It was the resilience and determination of people who would not give up," he added. "Change does happen."

The couple, who have been married for more than 60 years, can certainly speak from experience when it comes to protesting. Joseph took part in strategy meetings with Martin Luther King Jr -- the only white religious leader to do so -- after he became pastor of an all-Black church in Alabama at the age of 25. He also joined King in the five-day, 54-mile march from Selma to Montgomery in 1965, which historians consider a pivotal moment in the US civil rights movement.

Joyce, meanwhile, was jailed for 50 days after she rallied against the US military training of soldiers from El Salvador in the 1980s. Other causes taken up by the couple included opposing the Iraq war in the early 2000s. "You do what you have to do. You don't let them stop you just because they put up a blockade. You go around it," Joyce told AFP.

- 'We'll do our part' -

Joseph admitted he would like to slow down, noting the only time he and his wife unplug is on Sunday evening when they do a Zoom call with their three adult children. But Trump has kept them active with his sweeping executive actions -- including crackdowns on undocumented migrants and on foreign students protesting at US universities.

The threats to younger protesters are particularly concerning for Joyce, who compared those demonstrating today to the students on the streets during the 1960s. "They've been very non-violent, and to me, that's the most important part," she said.

Joyce also acknowledged the couple likely won't live to see every fight to the end, but insisted they still had a role to play. "We're standing on the shoulders of people who have built the justice movement and who have brought things forward. So, we'll do our part," she said. Joyce added that she and Joseph would be protesting again on June 14 as part of the national "No Kings" rally against Trump. "More people are taking to the streets, we will also be in the street," she said.
Hey chatbot, is this true? AI 'factchecks' sow misinformation
As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool. With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

- 'Fabricated' -

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

- 'Biased answers' -

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."