
Daughter of crypto boss escapes Paris kidnap attempt in latest in series of attacks
Armed assailants tried to kidnap the daughter and grandson of a French cryptocurrency boss in Paris, police said, in a brazen daytime attack that was caught on camera.
Tuesday morning's attack in Paris's 11th district is the latest in a string of violent incidents targeting figures in France's burgeoning crypto industry.
Four masked men attacked the daughter, her partner and their child in the French capital, police sources told French news agency Agence France-Presse (AFP).
Video footage shows three masked men jump out of a white van. The woman and her partner fight the attackers and loud screams for help are heard. Speaking to BFMTV, one witness said the assailants tried to 'pull a young woman by force' into the waiting vehicle.
The woman can be seen grabbing a gun off one of the masked men and throwing it into the street. The weapon, which was later recovered from the scene, turned out to be a fake, sources told BFMTV.
The screams attract the attention of passersby, who intervene, one of them armed with a fire extinguisher.
'I saw passersby saying to stop. A man went out into the street with a fire extinguisher to try to make these people leave,' a witness told French broadcaster BFMTV.
Eventually the assailants give up; the three men jump back into their van and the fourth suspect – the driver – makes a getaway.
Another woman who witnessed the scene told BFMTV, 'I went out into the street and saw this man lying on the ground with a pistol next to him, quite bloody.'
Once the attack was over, the victims were helped by people on the street. All three sustained minor injuries and were treated in hospital, BFMTV reported. The woman, who according to the news outlet was five months pregnant, was treated for shock, while her partner's face was covered in blood.
The woman in the footage is the daughter of the CEO and co-founder of Paymium, a French cryptocurrency exchange platform, according to AFP.
No arrests have yet been made in relation to the incident, CNN understands. The Paris prosecutor's office told CNN that it has opened an investigation into offenses of attempted arrest, abduction, kidnapping or arbitrary detention committed by an organized gang, aggravated violence and participation in a criminal association.
The attack on Tuesday follows the abductions of other cryptocurrency figures in France.
In January, David Balland, a co-founder of French crypto firm Ledger, had his hand mutilated after he and his wife were kidnapped from their home in central France. They were freed after a police operation. Part of the ransom demanded by the kidnappers was paid, Reuters reported.
France's Interior Minister, Bruno Retailleau, announced Wednesday he would hold a meeting with cryptocurrency entrepreneurs to discuss security in light of the spate of attacks, according to AFP.
