Latest news with #radicalization

Why do so many Americans join the Israeli military?

The Guardian

6 days ago

In his 1971 novel The Day of the Jackal, Frederick Forsyth renders a rich plot to assassinate Charles de Gaulle, the French president. The conspirators are pied-noirs, the term used to describe Frenchmen born in Algeria during the colonial occupation there. They grieve De Gaulle's exit from north Africa, which they regard as a betrayal. Unable to remain in the former colony, they return home – dejected and emasculated – and murderous. In many ways, the pied-noirs regard themselves as being more French than the French. The novel derives some of its appeal from the fact that it's rooted in history – revanchist Frenchmen made at least six attempts to assassinate De Gaulle in the 1960s. Yigal Amir, the Israeli settler who assassinated Yitzhak Rabin in 1995, reportedly devoured the book and drew inspiration from it.

I began to reflect on the novel after reading about the recent Hamas-US prisoner deal. Edan Alexander, the American Israeli soldier who was held captive by Hamas for a year and a half, 'grew up in New Jersey and moved to Israel after high school to join the military', as reported by the New York Times. When I read that line I wondered what drove his radicalization – what leads an American teenager to travel to a foreign country to join an army whose primary occupation is apartheid? The question is meaningful in its particulars, but it also highlights a broader phenomenon: Alexander's path is not remotely unique.

The Washington Post reported in February 2024 that 'an estimated 23,380 American citizens currently serve in Israeli ranks'. But they have traveled a trail worn and bloodied by others. Baruch Goldstein, an American Zionist who murdered 29 Palestinians in a mosque in Hebron in 1994, was from Brooklyn. The Post story, which profiles the families of Americans who died serving in the Israeli army, describes their 'fierce commitment to the Jewish state'. Two of the three families have lived or volunteered in settlements – the apartheid infrastructure Israel has built in the West Bank. One mother describes her son, who died while perpetrating a genocide in Gaza, as 'more Israeli than the Israelis'. A father describes his family's journey from America by saying: 'We came for Zionism.'

The story goes on to describe the elaborate social apparatus through which young Americans are radicalized. One soldier who was killed in Gaza 'worked each year at a Zionist summer camp in Pennsylvania'. Reading the article, I got a strong sense of the brainwashing, the in-group dynamic at work. The families seem to regard their choices, and those of their children, as normal – valiant, even.

To be sure, the phenomenon of Americans joining foreign armies is not unique to Zionists or Israel. NPR reports that hundreds of Americans are fighting alongside Ukrainians in their war against the Russian occupation. But hundreds is not the same as tens of thousands, and fighting occupation is the opposite of investing in and propagating it. Now, with the genocide in Palestine, we're faced with a reality in which tens of thousands of Americans are actively involved in war crimes. They are part of an army responsible for the murder of more than 20,000 children in Gaza, where the Economist estimates that Israeli soldiers have killed between 77,000 and 109,000 people, or 4-5% of the territory's 2023 population.
The radicalization of young Zionist men and women does not receive the attention it deserves from the FBI and law enforcement – in contrast with the experience of Muslims, which the writer Arun Kundnani describes in his book The Muslims Are Coming! The reason for this hesitation lies first in the history of antisemitism in the west, where Jewish people have been accused of harboring dual loyalties for hundreds of years. The Dreyfus Affair in France – in which a Jewish officer was falsely accused of treason – serves as the exemplar here. And in Germany, Jewish veterans of the first world war found that they were Jewish before they were German. Berthold Guthmann, for example, received the Iron Cross for bravery in the first world war. He was murdered at Auschwitz in 1944 by his former colleagues. Good people do not want to be accused of antisemitism. And if talking about a headache makes it worse, it's better not to talk at all.

But more than antisemitism, there's the fact of America's establishment affinity for Israel – which recalls the French sympathy for the pied-noirs in the 1950s. In Congress, Brian Mast has been known to wear the uniform of the Israeli military while performing official duties. He also volunteered for the Israeli army. The affinity is similar among Democrats: Chuck Schumer told a New York Times columnist, 'My job … is to keep the left pro-Israel.'

The tendency to regard Israel as an extension of the United States exists within the media as well. In an interview with Ta-Nehisi Coates, a CBS anchor described the author's work on Palestine as resembling 'extremist' writings. The network later distanced itself from the anchor's statements and behavior. A more recent example took place in May. In a tense interview on MSNBC, the Pulitzer prize-winning poet Mosab Abu Toha highlighted the fact that Israeli soldiers – men and women – are perpetrating mass murder in Gaza. Abu Toha went on to recount the stories of members of his own family who have been killed by Israeli pilots. He described how some of their bodies are irrecoverable – they have lain under the rubble of their bombed homes for more than 500 days.

Abu Toha, through his clear description of the depredations of Israeli troops – and his unrelenting focus on their victims – offers a path. One can hope that American mothers and fathers may watch his interview, and others like it, and say: 'No, I do not want my son to be radicalized, to participate in an atrocity.' Surely their love for their children demands it.

Ahmed Moor is a writer and fellow at the Foundation for Middle East Peace

If algorithms radicalize a mass shooter, are companies to blame?

The Verge

27-05-2025

In a New York court on May 20th, lawyers for the nonprofit Everytown for Gun Safety argued that Meta, Amazon, Discord, Snap, 4chan, and other social media companies all bear responsibility for radicalizing a mass shooter. The companies defended themselves against claims that their respective design features — including recommendation algorithms — promoted racist content to a man who killed 10 people in 2022, then facilitated his deadly plan. It's a particularly grim test of a popular legal theory: that social networks are products that can be found legally defective when something goes wrong. Whether this works may rely on how courts interpret Section 230, a foundational piece of internet law.

In 2022, Payton Gendron drove several hours to the Tops supermarket in Buffalo, New York, where he opened fire on shoppers, killing 10 people and injuring three others. Gendron claimed to have been inspired by previous racially motivated attacks. He livestreamed the attack on Twitch and, in a lengthy manifesto and a private diary he kept on Discord, said he had been radicalized in part by racist memes and intentionally targeted a majority-Black community.

Everytown for Gun Safety brought multiple lawsuits over the shooting in 2023, filing claims against gun sellers, Gendron's parents, and a long list of web platforms. The accusations against different companies vary, but all place some responsibility for Gendron's radicalization at the heart of the dispute.

The platforms are relying on Section 230 of the Communications Decency Act to defend themselves against a somewhat complicated argument. In the US, posting white supremacist content is typically protected by the First Amendment. But these lawsuits argue that if a platform feeds it nonstop to users in an attempt to keep them hooked, that becomes a sign of a defective product — and, by extension, breaks product liability laws if it leads to harm. That strategy requires arguing that companies are shaping user content in ways that shouldn't receive protection under Section 230, which prevents interactive computer services from being held liable for what users post, and that their services are products that fit under the liability law.

'This is not a lawsuit against publishers,' John Elmore, an attorney for the plaintiffs, told the judges. 'Publishers copyright their material. Companies that manufacture products patent their materials, and every single one of these defendants has a patent.' These patented products, Elmore continued, are 'dangerous and unsafe' and are therefore 'defective' under New York's product liability law, which lets consumers seek compensation for injuries.

Some of the tech defendants — including Discord and 4chan — don't have proprietary recommendation algorithms tailored to individual users, but the claims against them allege that their designs still aim to hook users in a way that predictably encouraged harm. 'This community was traumatized by a juvenile white supremacist who was fueled with hate — radicalized by social media platforms on the internet,' Elmore said. 'He obtained his hatred for people who he never met, people who never did anything to his family or anything against him, based upon algorithm-driven videos, writings, and groups that he associated with and was introduced to on these platforms that we're suing.' These platforms, Elmore continued, own 'patented products' that 'forced' Gendron to commit a mass shooting.
A meme-fueled shooting

In his manifesto, Gendron called himself an 'eco-fascist national socialist' and said he had been inspired by previous mass shootings in Christchurch, New Zealand, and El Paso, Texas. Like his predecessors, Gendron wrote that he was concerned about 'white genocide' and the great replacement: a conspiracy theory alleging that there is a global plot to replace white Americans and Europeans with people of color, typically through mass immigration. Gendron pleaded guilty to state murder and terrorism charges in 2022 and is currently serving life in prison.

According to a report by the New York attorney general's office, which was cited by the plaintiffs' lawyers, Gendron 'peppered his manifesto with memes, in-jokes, and slang common on extremist websites and message boards,' a pattern found in some other mass shootings. Gendron encouraged readers to follow in his footsteps, and urged extremists to spread their message online, writing that memes 'have done more for the ethno-nationalist movement than any manifesto.'

Citing Gendron's manifesto, Elmore told the judges that before Gendron was 'force-fed online white supremacist materials,' he never had any problems with or animosity toward Black people. 'He was encouraged by the notoriety that the algorithms brought to other mass shooters that were streamed online, and then he went down a rabbit hole.'

Everytown for Gun Safety sued nearly a dozen companies — including Meta, Reddit, Amazon, Google, YouTube, Discord, and 4chan — over their alleged role in the shooting in 2023. Last year, a federal judge allowed the suits to proceed.

Racism, addiction, and 'defective' design

The racist memes Gendron was seeing online are undoubtedly a major part of the complaint, but the plaintiffs aren't arguing that it's illegal to show someone racist, white supremacist, or violent content. In fact, the September 2023 complaint explicitly notes that the plaintiffs aren't seeking to hold YouTube 'liable as the publisher or speaker of content posted by third parties,' partly because that would give YouTube ammunition to get the suit dismissed on Section 230 grounds. Instead, they're suing YouTube as the 'designers and marketers of a social media product … that was not reasonably safe and that was reasonably dangerous for its intended use.'

Their argument is that the addictive nature of YouTube's and other platforms' algorithms, coupled with a willingness to host white supremacist content, makes those services unsafe. 'A safer design exists,' the complaint states, but YouTube and other social media platforms 'have failed to modify their product to make it less dangerous because they seek to maximize user engagement and profits.'

The plaintiffs made similar complaints about other platforms. Twitch, which doesn't rely on algorithmic recommendations, could alter its product so that videos run on a time delay, Amy Keller, an attorney for the plaintiffs, told the judges. Reddit's upvoting and karma features create a 'feedback loop' that encourages use. 4chan doesn't require users to register accounts, allowing them to post extremist content anonymously. 'There are specific types of defective designs that we talk about with each of these defendants,' Keller said, adding that platforms with algorithmic recommendation systems are 'probably at the top of the heap when it comes to liability.'

During the hearing, the judges asked the plaintiffs' attorneys whether these algorithms are always harmful.
'I like cat videos, and I watch cat videos; they keep sending me cat videos,' one of the judges said. 'There's a beneficial purpose, is there not? There's some thought that without algorithms, some of these platforms can't work. There's just too much information.'

After agreeing that he loves cat videos, Glenn Chappell, another attorney for the plaintiffs, said the issue lies with algorithms 'designed to foster addiction and the harms resulting from that type of addictive mechanism are known.' In those instances, Chappell said, 'Section 230 does not apply.' The issue was 'the fact that the algorithm itself made the content addictive,' Keller said.

Third-party content and 'defective' products

The platforms' lawyers, meanwhile, argued that sorting content in a particular way shouldn't strip them of protections against liability for user-posted content. While the complaint may argue it's not saying web services are publishers or speakers, the platforms' defense counters that this is still a case about speech, where Section 230 applies. 'Case after case has recognized that there's no algorithms exception to the application of Section 230,' Eric Shumsky, an attorney for Meta, told the judges. The Supreme Court considered whether Section 230 protections applied to algorithmically recommended content in Gonzalez v. Google, but in 2023, it dismissed the case without reaching a conclusion or redefining the currently expansive protections.

Shumsky contended that the personalized nature of algorithms prevents them from being 'products' under the law. 'Services are not products because they are not standardized,' Shumsky said. Unlike cars or lawnmowers, 'these services are used and experienced differently by every user,' since platforms 'tailor the experiences based on the user's actions.' In other words, algorithms may have influenced Gendron, but Gendron's beliefs also influenced the algorithms.

Section 230 is a common counter to claims that social media companies should be liable for how they run their apps and websites, and one that's sometimes succeeded. A 2023 court ruling found that Instagram, for instance, wasn't liable for designing its service in a way that allowed users to transmit harmful speech. The accusations 'inescapably return to the ultimate conclusion that Instagram, by some flaw of design, allows users to post content that can be harmful to others,' the ruling said.

Last year, however, a federal appeals court ruled that TikTok had to face a lawsuit over a viral 'blackout challenge' that some parents claimed led to their children's deaths. In that case, Anderson v. TikTok, the Third Circuit court of appeals determined that TikTok couldn't claim Section 230 immunity, since its algorithms fed users the viral challenge. The court ruled that the content TikTok recommends to its users isn't third-party speech generated by other users; it's first-party speech, because users see it as a result of TikTok's proprietary algorithm.

The Third Circuit's ruling is anomalous, so much so that Section 230 expert Eric Goldman called it 'bonkers.' But there's a concerted push to limit the law's protections. Conservative legislators want to repeal Section 230, and a growing number of courts will need to decide whether users of social networks are being sold a dangerous bill of goods — not simply a conduit for their speech.

Germany updates: Police chief warns of youth radicalization

Yahoo

24-05-2025

The head of Germany's Federal Criminal Police Office (BKA), Holger Münch, has told newspapers that some young people are organizing themselves in groups to commit "serious crimes" after being radicalized by far-right ideologies. His remarks come after German police this week cracked down on a far-right extremist cell with members as young as 14. Train services at Hamburg's main station are meanwhile back to normal after 18 people were injured on Friday in a knife attack by a female suspect.

This is a roundup of the top news stories from Germany on May 24, 2025.

Train services at Hamburg's main station have resumed normal operations after a knife attack on Friday that left 18 injured, a spokeswoman for train operator Deutsche Bahn told the DPA news agency. A 39-year-old woman was arrested at the scene on suspicion of carrying out the attack. She is to come before a magistrate on Saturday. Four of the 18 wounded suffered life-threatening injuries, while six were seriously hurt, officials said. So far, police do not believe the attack was politically or ideologically motivated, but rather the result of some kind of psychological distress on the part of the attacker.

The head of Germany's Federal Criminal Police Office (BKA), Holger Münch, has warned that young people within right-wing extremist circles are becoming increasingly radicalized. "For about a year, we've increasingly seen very young people with right-wing views becoming more radicalized and forming, at times, well-organized groups to carry out serious crimes," Münch told the Funke media group of newspapers in remarks published on Saturday. He said the internet was a major factor helping the far-right scene spread its network. "Radicalization, recruitment and mobilization increasingly happen via social networks and right-wing forums," Münch said. The BKA head said right-wing crime was posing a "major challenge" to security agencies, but that general society also had a big role to play in reducing the threat. His remarks follow the arrests this week of five male suspects aged 14 to 18 who were members of a far-right extremist cell alleged to have plotted violent attacks on migrants.

The head of Germany's federal crime agency, Holger Münch, has told newspapers that young people are increasingly falling under the thrall of far-right extremist ideologies, with some prepared to commit "serious crimes." Meanwhile, train services at the main station in the northern port city of Hamburg have resumed full operations after disruption caused on Friday by a knife attack, carried out by a suspected female assailant, in which several people were injured.

DW's Bonn newsroom keeps you up to speed with the latest headlines from Germany at a time when Europe's economic powerhouse is facing several major challenges from within and abroad.

German police chief warns of rising right-wing youth radicalization

Yahoo

24-05-2025

The head of Germany's Federal Criminal Police Office (BKA), Holger Münch, has issued a warning about the increasing radicalization of young people within right-wing extremist circles. "For about a year, we've increasingly seen very young people with right-wing views becoming more radicalized and forming, at times, well-organized groups to carry out serious crimes," Münch told the Funke media group of newspapers in remarks published on Saturday.

He highlighted the growing role of the internet as a networking space for the far-right scene. "Radicalization, recruitment and mobilization increasingly happen via social networks and right-wing forums," Münch said. The high number and severity of far-right motivated crimes pose a "major challenge" to security agencies, which are responding with increased surveillance, according to Münch. He emphasized that tackling the issue is not solely the responsibility of the police, but a challenge that requires joint effort across all parts of society to prevent serious acts of violence.

Earlier this week, German federal prosecutors launched a crackdown on a far-right extremist cell accused of plotting violent attacks targeting migrants. Five male suspects aged 14 to 18 were arrested in coordinated raids. The teens are accused of being part of – or in one case supporting – a group that calls itself the Last Wave of Defence. According to prosecutors, the group aimed to destabilize Germany's democratic system through acts of violence, primarily targeting migrants and political opponents.

This AI scans Reddit for 'extremist' terms and plots bot-led intervention

Fast Company

23-05-2025

A computer science student is behind a new AI tool designed to track down Redditors showing signs of radicalization and deploy bots to 'deradicalize' them through conversation.

First reported by 404 Media, PrismX was built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India. The tool works by analyzing posts for specific keywords and patterns associated with extreme views, giving those users a 'radical score.' High scorers are then targeted by AI bots programmed to attempt deradicalization by engaging the user in conversation.

According to the federal government, the primary terror threat to the U.S. now comes from individuals radicalized to violence online through social media. At the same time, fears around surveillance technology and artificial intelligence infiltrating online communities pose an ethical minefield.

Responding to concerns, Balaji clarified in a LinkedIn post that the conversation part of the tool has not been tested on real Reddit users without consent. Instead, the scoring and conversation elements were used in simulated environments for research purposes only. 'The tool was designed to provoke discussion, not controversy,' he explained in the post. 'We're at a point in history where rogue actors and nation-states are already deploying weaponized AI. If a college student can build something like PrismX, it raises urgent questions: Who's watching the watchers?'

While Balaji doesn't claim to be an expert in deradicalization, as an engineer he is interested in the ethical implications of surveillance technology. 'Discomfort sparks debate. Debate leads to oversight. And oversight is how we prevent the misuse of emerging technologies,' he continued.

This isn't the first time Redditors have been used as guinea pigs in recent months. Just last month, researchers from the University of Zurich faced intense backlash after experimenting on an unsuspecting subreddit. The research involved deploying AI-powered bots into the r/ChangeMyView subreddit, which positions itself as a 'place to post an opinion you accept may be flawed', in an experiment to see whether AI could be used to change people's minds.

When Redditors, and Reddit itself, found out they were being experimented on without their knowledge, they weren't impressed. Reddit's chief legal officer, Ben Lee, wrote in a post that neither Reddit nor the r/changemyview mods knew about the experiment ahead of time. 'What this University of Zurich team did is deeply wrong on both a moral and legal level,' Lee wrote. 'It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules.'

While PrismX is not currently being tested on real, unconsenting users, it adds to the ever-growing questions about the role of artificial intelligence in human spaces.
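The article describes PrismX's scoring stage only at a high level: scan a user's posts for flagged keywords and patterns, then aggregate the hits into a 'radical score' that decides whether the conversational bots engage. As a rough illustration of that idea, here is a minimal keyword-weighted scorer in Python. Everything in it is hypothetical: the term list, weights, threshold, and function names are invented for illustration, since PrismX's actual lexicon and scoring model have not been published.

# Minimal sketch of a keyword-weighted "radical score" as described in
# the article. All terms, weights, and thresholds are hypothetical
# placeholders, not PrismX's real implementation.
import re
from collections import Counter

# Hypothetical weighted lexicon; a real system would need a curated,
# research-backed term list rather than this toy stand-in.
FLAGGED_TERMS = {
    "placeholder_term_a": 3.0,  # stands in for a high-signal phrase
    "placeholder_term_b": 1.0,  # stands in for a weaker signal
}

def radical_score(posts):
    """Sum weighted keyword hits across a user's posts, normalized by
    post count so prolific posters aren't over-penalized."""
    if not posts:
        return 0.0
    hits = Counter()
    for post in posts:
        # Tokenize into lowercase word-like chunks and count flagged hits.
        for token in re.findall(r"[a-z_']+", post.lower()):
            if token in FLAGGED_TERMS:
                hits[token] += 1
    return sum(FLAGGED_TERMS[t] * n for t, n in hits.items()) / len(posts)

def flag_users(histories, threshold=2.0):
    """Return users whose score crosses the (hypothetical) threshold,
    i.e. the set a bot-led intervention would be queued for."""
    return [user for user, posts in histories.items()
            if radical_score(posts) >= threshold]

Even this toy version shows where the contested judgment calls live: who compiles the term list, how the weights are set, and where the intervention threshold sits are all editorial decisions baked into code, which is precisely the 'who's watching the watchers?' concern Balaji raises.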
