Italy's far-right League faces complaint over ‘racist, Islamophobic' AI-generated images

Arab News • 18-04-2025

LONDON: Italy's far-right League party has been referred to the country's communications watchdog after opposition parties filed a complaint over ‘racist, Islamophobic and xenophobic' images generated by artificial intelligence and shared on social media by deputy prime minister and party leader Matteo Salvini.
The complaint was submitted to Agcom, Italy's communications regulatory authority, on Thursday by the center-left Democratic Party, along with the Greens and Left Alliance. It alleges the images published by the League contained ‘almost all categories of hate speech,' according to The Guardian, which first reported the story.
‘In the images published by Salvini's party and generated by AI there are almost all categories of hate speech, from racism and xenophobia to Islamophobia. They are using AI to target specific categories of people — immigrants, Arabs — who are portrayed as potential criminals, thieves and rapists,' said Antonio Nicita, a Democratic Party senator.
Nicita also criticized the decision to blur the faces of the supposed victims, calling it ‘deceptive' and accusing the League of intentionally misleading users into believing the images were real.
Emilio Borrelli, an MP with the Greens and Left Alliance, said the images were ‘part of their strategy to create fear among citizens' and ‘incite hate'.
Over the past month, dozens of apparently AI-generated images have been posted across the League's social media channels, including Facebook, Instagram and X. Many depict men of color, often armed with knives, attacking women or police officers.
A spokesperson for Salvini's party confirmed some of the pictures were digitally generated but insisted: ‘The point is not the image. The point is the fact,' adding that the posts were ‘based on true reports from Italian newspapers'.
However, AI forensics experts said that all the images in question bore clear signs of being artificially generated. They also noted that while platforms are required to label AI-generated content, automatic detection tools failed to do so in most cases.
In one of the posts cited in the complaint, a mother and father in Islamic dress appear to be shouting angrily at a young girl — a portrayal the complainants say fuels racial and Islamophobic stereotypes. The newspaper cited in the post, Il Giorno, makes no reference to the family's religion and does not include any photographs. The only detail given was that the child had attended Arabic language classes.
As The Guardian reported, the use of AI-generated imagery by far-right parties across Europe has surged in recent months. The targets are often refugees from conflict zones such as Syria and Sudan, people from sub-Saharan Africa, and members of other minority groups. These depictions frequently invoke the debunked ‘Great Replacement' conspiracy theory, which falsely claims that immigration is part of a plot to erode European identity and culture.
Salvini, who has capitalized on rising refugee arrivals in Europe to maintain a prominent role in Italian politics and advocate for stricter immigration policies, has frequently made headlines for inflammatory remarks, including calling immigrants — often men — ‘dogs and pigs'. In late 2024, he was acquitted of charges of kidnapping and dereliction of duty after judges ruled that the evidence presented by prosecutors was insufficient to convict him. The case stemmed from a 2019 incident in which Salvini, then interior minister, refused to allow a Spanish migrant rescue ship to dock in an Italian port, leaving those on board stranded at sea for 19 days.
Asked whether the League was aware the images could incite hate, a party spokesperson said: ‘We are sorry, but our solidarity goes to the victims, not the perpetrators. If denouncing crimes committed by foreigners means "xenophobia", perhaps the problem is not the word but those who use it to censor debate. We will continue to denounce, with strong words and images, what others prefer to ignore.'
If Agcom finds the League's content in violation of regulations, it could act under the EU's Digital Services Act, which allows it to order the removal of posts, shut down accounts or impose fines on social media platforms for failing to moderate harmful content.


Related Articles

AI-generated Pope Leo sermons flood YouTube, TikTok

Al Arabiya • 7 hours ago

AI-generated videos and audio clips of Pope Leo XIV are rapidly proliferating online, racking up views as platforms struggle to police them.

An AFP investigation identified dozens of YouTube and TikTok pages that have been churning out AI-generated messages delivered in the pope's voice or otherwise attributed to him since he took charge of the Catholic Church last month. The hundreds of fabricated sermons and speeches, in English and Spanish, underscore how easily hoaxes created using artificial intelligence can elude detection and dupe viewers.

‘There's natural interest in what the new pope has to say, and people don't yet know his stance and style,' said University of Washington professor emeritus Oren Etzioni, founder of a nonprofit focused on fighting deepfakes. ‘A perfect opportunity to sow mischief with AI-generated misinformation.'

After AFP presented YouTube with 26 channels posting predominantly AI-generated pope content, the platform terminated 16 of them for violating its policies against spam, deceptive practices, and scams, and another for violating YouTube's terms of service. ‘We terminated several channels flagged to us by AFP for violating our spam policies and Terms of Service,' spokesperson Jack Malon said. The company also booted an additional six pages from its partner program, which allows creators to monetize their content.

TikTok similarly removed 11 accounts that AFP pointed out, which had over 1.3 million combined followers, citing the platform's policies against impersonation, harmful misinformation, and misleading AI-generated content of public figures.

With names such as ‘Pope Leo XIV Vision,' the social media pages portrayed the pontiff supposedly offering a flurry of warnings and lessons he never preached. But disclaimers annotating their use of AI were often hard to find, and sometimes nonexistent.

On YouTube, a label demarcating ‘altered or synthetic content' is required for material that makes someone appear to say something they did not, but such disclosures only show up toward the bottom of each video's click-to-open description. A YouTube spokesperson said the company has since applied a more prominent label to some videos on the channels flagged by AFP that were not found to have violated the platform's guidelines.

TikTok also requires creators to label posts sharing realistic AI-generated content, though several pope-centric videos went unmarked. A TikTok spokesperson said the company proactively removes policy-violating content and uses verified badges to signal authentic accounts.

Brian Patrick Green, director of technology ethics at Santa Clara University, said the moderation difficulties stem from rapid AI developments inspiring ‘chaotic uses of the technology.'

Many clips on the YouTube channels AFP identified amassed tens of thousands of views before being deactivated. On TikTok, one Spanish-language video received 9.6 million views while claiming to show Leo preaching about the value of supportive women. Another, which carried an AI label but still fooled viewers, was watched some 32.9 million times. No video on the pope's official Instagram page has more than 6 million views.

Experts say even seemingly harmless fakes can be problematic, especially if used to farm engagement for accounts that might later sell their audiences or pivot to other misinformation.
The AI-generated sermons not only ‘corrode the pope's moral authority' and ‘make whatever he actually says less believable,' Green said, but could be harnessed ‘to build up trust around your channel before having the pope say something outrageous or politically expedient.'

The pope himself has also warned about the risks of AI, while Vatican News called out a deepfake that purported to show Leo praising Burkina Faso leader Ibrahim Traoré, who seized power in a 2022 coup. AFP also debunked clips depicting the pope, who holds American and Peruvian citizenship, criticizing US Vice President JD Vance and Peru's President Dina Boluarte.

‘There's a real crisis here,' Green said. ‘We're going to have to figure out some way to know whether things are real or fake.'

People must see themselves in the AI revolution

Arab News • 14 hours ago

President Donald Trump's historic visit to Saudi Arabia was not merely another high-profile diplomatic stop. It was a signal, one that reverberates far beyond ceremonial pageantry or economic accords. With a sweeping agenda anchored in regional security and technological advancement, the visit marked a profound turning point: the introduction of artificial intelligence as a centerpiece in reimagining international alliances and national futures.

As Saudi Arabia deepens its strategic commitment to AI, the spotlight now turns to a less discussed — yet far more consequential — question: Who truly owns the AI revolution?

For too long, the narrative has belonged to technologists. From Silicon Valley labs to national AI strategies, the story of AI has been told in the language of algorithms, architectures, and compute. And while the technical infrastructure is essential, we argue that such a narrow view of AI is not only incomplete, it is dangerous.

When the American Institute of Artificial Intelligence and Quantum (AIAIQ) was launched in the US in 2016, the institutional landscape for AI was highly specialized. Data scientists, computer engineers, and mathematicians dominated the discourse. Policymakers and business leaders, overwhelmed by complexity, often stood at a distance. AI was regarded as something technical — a toolset, a model, an optimization system.

The same pattern is now emerging in Saudi Arabia and across the Gulf. Government agencies are in search of use cases. Consultants are offering solutions in search of problems. Infrastructure projects are underway to create sovereign large language models and national AI platforms. In these efforts, AI is often reduced to a software engineering challenge — or worse, a procurement exercise.

But this lens fails to capture the essence of the revolution underway. What's at stake is not simply how nations compute. It's how they think, organize, and act in a new age of machine cognition.

We've long argued that AI cannot — and must not — be the exclusive domain of technologists. A true revolution occurs only when the masses engage. Just as the internet went mainstream not through protocols and standards, but through wide-scale adoption and imaginative use, AI must be demystified and integrated into the fabric of society. It is neither feasible nor necessary to turn an entire nation into data scientists. We need a nation of informed leaders, innovators, teachers, managers, and citizens who can speak the language of AI, not in code, but in context.

This conviction led AIAIQ to become the world's first applied AI institute focused not on producing more PhDs, but on educating professionals across sectors — from finance and healthcare to logistics and public service. Our mission was clear: to build a movement of AI adoption engineering, centered on human understanding, social responsibility, and economic impact.

History has shown that every technological revolution requires more than invention. It requires meaning. When the automobile first arrived in America, it was met with skepticism. Roads were unprepared. Public opinion was divided. Without storytelling, explanation, and cultural adaptation, the car might have remained a niche novelty.

AI is no different, but the stakes are higher. Unlike past revolutions, AI directly threatens to reshape or eliminate jobs across virtually all sectors. It raises moral questions about decision-making, power, privacy, and the nature of intelligence itself.
Without a serious effort to prepare populations, the result will be confusion, fear, and backlash. Adoption is not just about teaching Python or TensorFlow. It is about building cognitive readiness in society — a collective ability to make sense of AI as a force that operates both with us and around us.

AIAIQ's work in the US, and now in the Kingdom, reflects this ethos. We don't approach AI as a product to be sold. We approach it as a paradigm to be understood, negotiated, and lived.

Over nearly a decade of pioneering applied AI education, we've identified four essential elements for ensuring that technological revolutions — especially this one — take root meaningfully within society:

• People need help interpreting what AI actually is and how it is changing their world. It's not just a black box; it's a new kind of collaborator, a new model of thought.
• Technologies cannot remain in labs or behind firewalls. They must be translated into the language and workflow of everyday people. Mass understanding is more vital than mass compute.
• Every revolution carries moral implications. If not carefully navigated, AI can create a deep dissonance between traditional societal values and new forms of digital governance.
• Above all, people must see themselves in the revolution. They must feel empowered to participate, to lead, and to shape what comes next.

Much has been made of ‘sovereign AI' — the ambition of nations to build homegrown LLMs and nationalized data infrastructure. Several Gulf nations are investing heavily in this vision. And yet, we caution: True sovereignty is not measured by the size of your data center, but by the sophistication of your human capital. You can localize your AI stack, but unless you cultivate a generation of researchers, engineers, business innovators, and public thinkers, your systems will be technologically impressive but strategically hollow. Sovereignty is about stewardship. That requires education, experimentation, and the freedom to adapt.

As Saudi Arabia targets massive economic transformation, the challenge is not just to build smart systems, but to build a smart society that knows what to do with them.

President Trump's visit, and the unprecedented alignment between American and Saudi priorities around AI, is not just symbolic. It marks a deeper shift in how global partnerships are defined. Oil once defined alliances. Now, intelligence — both human and machine — will. For the first time, nations are collaborating not to dominate territory, but to co-develop cognition. The tools may be digital, but the outcome will be profoundly human.

The alignment between global and local initiatives in Saudi Arabia represents a shared belief that the future is not only coded in silicon but shaped in classrooms, boardrooms, war rooms, and living rooms. The AI revolution is coming. But it must belong to the people. Otherwise, it will never become a revolution.

• Mohammed Al-Qarni is a leading voice in AI policy and governance in the Gulf and Ali Naqvi is the founder of the American Institute of Artificial Intelligence and Quantum.

Reddit Sues AI Giant Anthropic Over Content Use

Asharq Al-Awsat • a day ago

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit, filed in a California state court, represents the latest front in the growing battle between content providers and AI companies over the use of data to train the increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training. The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to access Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said: "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants, including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform. Those deals have helped lift Reddit's share price since it went public in 2024, and Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies for using their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation. Though most of these lawsuits are still in their early stages, their outcomes could have a profound effect on the shape of the AI industry.
