How AI tools can threaten cultural diversity

The Star • 01-05-2025

Artificial intelligence is now in widespread use. Presented as an everyday ally that promises to make our lives easier and reimagine the way we write, it nonetheless carries a major risk: a US study claims that, by imposing Western writing standards, AI could smooth out individual styles and erase cultural particularities.
To measure this threat, a team from Cornell University, led by Professor Aditya Vashistha, conducted a ground-breaking experiment with 118 American and Indian participants. Each of them was asked to write texts on cultural themes, with or without the help of an AI writing assistant. The aim was to observe the influence of AI on their respective styles.
It turns out that, while AI boosts writing speed, it also profoundly transforms personal styles. This phenomenon was particularly noticeable among the Indian participants, whose writing style became much more Americanised. To adapt to the AI's suggestions, they often had to make numerous changes.
"When Indian users use writing suggestions from an AI model, they start mimicking American writing styles to the point that they start describing their own festivals, their own food, their own cultural artifacts from a Western lens," explains Dhruv Agarwal, a doctoral student at Cornell and first author of the study, quoted in a news release.
A detailed analysis of the texts shows that the Indian participants accepted 25% of the AI's suggestions, compared with 19% for their American counterparts. At the same time, the Indian participants were significantly more likely to modify the AI's suggestions to fit their topic and writing style and to maintain cultural relevance. For example, the AI typically suggested "Christmas" to evoke a favourite holiday, overlooking Diwali, one of the country's biggest festivals.
This bias is no mere anecdote. The authors denounce a veritable form of "AI colonialism", an insidious cultural domination in which Western standards are imposed to the detriment of other identities. And the consequences are far-reaching. By standardising the way they write, people could end up seeing their own culture through a foreign lens, to the point of altering their individual perception of it.
"This is one of the first studies, if not the first, to show that the use of AI in writing could lead to cultural stereotyping and language homogenisation," says Aditya Vashistha. "People start writing similarly to others, and that's not what we want. One of the beautiful things about the world is the diversity that we have."
Professor Aditya Vashistha and colleagues are well aware of this and are calling for a change of direction. Cornell's Global AI Initiative is already looking to join forces with industry to build policies and tools that are more attentive to cultural specificities.
The stakes are immense. It's a question of safeguarding the richness and diversity of human expression, protecting the plurality of voices and imaginations, and preventing digital homogenisation. Indeed, defending cultural diversity in the face of AI is not just an ethical choice, it's a collective emergency. – AFP Relaxnews

Related Articles

PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis

Malay Mail • 9 hours ago

HONG KONG SAR - Media OutReach Newswire - 10 June 2025 - While artificial intelligence (AI) technology is evolving rapidly, AI models still struggle with understanding long videos. A research team from The Hong Kong Polytechnic University (PolyU) has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating humans' way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to a world-leading AI conference.

Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. As visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.

Prof. Changwen Chen, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, and his team have achieved a breakthrough in research on long video reasoning by AI. In designing VideoMind, they made reference to a human-like process of video understanding and introduced a role-based workflow. The four roles included in the framework are: the Planner, to coordinate all other roles for each query; the Grounder, to localise and retrieve relevant moments; the Verifier, to validate the information accuracy of the retrieved moments and select the most reliable one; and the Answerer, to generate the query-aware answer. This progressive approach to video understanding helps address the challenge of temporal-grounded reasoning that most AI models struggle with.

The core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a fine-tuning technique that has emerged in recent years; it adapts AI models for specific uses without performing full-parameter retraining. The Chain-of-LoRA strategy pioneered by the team involves applying four lightweight LoRA adapters in a unified model, each designed for calling a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling to seamlessly switch among these roles, eliminating the need and cost of deploying multiple models while enhancing the efficiency and flexibility of the single model.

VideoMind is open source on GitHub and Hugging Face. Details of the experiments conducted to evaluate its effectiveness in temporal-grounded video understanding across 14 diverse benchmarks are also available.
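To make the role-based workflow concrete, below is a minimal sketch of how a single model might dispatch a query across the Planner, Grounder, Verifier and Answerer roles. Everything in it (make_role, answer_query, the toy timestamps and the commented adapter call) is an illustrative assumption rather than the actual VideoMind code, which is available in the team's open-source release.

# Minimal sketch of a role-based, adapter-switching workflow of the kind
# described above (Planner -> Grounder -> Verifier -> Answerer).
# All names and values are illustrative assumptions, not the VideoMind API.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class RoleOutput:
    role: str
    payload: dict

def make_role(name: str, run: Callable[[dict], dict]) -> Callable[[dict], RoleOutput]:
    # In VideoMind each role would correspond to a lightweight LoRA adapter
    # switched in on one base model (e.g. something like
    # base_model.set_adapter(name) in PEFT-style code -- an assumption here).
    def _call(state: dict) -> RoleOutput:
        return RoleOutput(role=name, payload=run(state))
    return _call

# Toy role behaviours operating on a shared state dictionary.
planner = make_role("planner", lambda s: {"plan": ["ground", "verify", "answer"]})
grounder = make_role("grounder", lambda s: {"moments": [(120.0, 150.0), (300.0, 320.0)]})
verifier = make_role("verifier", lambda s: {"best_moment": s["moments"][0]})
answerer = make_role(
    "answerer",
    lambda s: {"answer": f"The queried event occurs around {s['best_moment'][0]:.0f}s."},
)

ROLES: Dict[str, Callable[[dict], RoleOutput]] = {
    "ground": grounder,
    "verify": verifier,
    "answer": answerer,
}

def answer_query(query: str) -> str:
    # The Planner decides the role sequence; every step reuses the same state,
    # mirroring one model that swaps adapters rather than four separate models.
    state: dict = {"query": query}
    plan = planner(state).payload["plan"]
    for step in plan:
        state.update(ROLES[step](state).payload)
    return state["answer"]

if __name__ == "__main__":
    print(answer_query("When does the goal happen in the match video?"))

The design point the loop is meant to illustrate is the one the release stresses: a single base model serves all four roles by switching lightweight adapters, instead of deploying four separate models.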
Comparing VideoMind with some state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that the grounding accuracy of VideoMind outperformed all competitors in challenging tasks involving videos with an average duration of 27 minutes. Notably, the team included two versions of VideoMind in the experiments: one with a smaller, 2 billion (2B) parameter model, and another with a bigger, 7 billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind still yielded performance comparable with many of the other 7B-size models.

Prof. Chen said, "Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient, with the human brain using only about 25 watts of power, which is about a million times lower than that of a supercomputer with equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos like humans, while leveraging the Chain-of-LoRA strategy to minimise the need for computing power and memory in this process."

AI is at the core of global technological development, but the advancement of AI models is constrained by insufficient computing power and excessive power consumption. Built upon a unified, open-source model, Qwen2-VL, and augmented with additional optimisation tools, the VideoMind framework has lowered the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models.

Prof. Chen added, "VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to various areas, such as intelligent surveillance, sports and entertainment video analysis, video search engines and more."

Digital Ministry outlines three key initiatives to strengthen Malaysia's AI ecosystem

The Sun • 9 hours ago

JOHOR BAHRU: The Ministry of Digital has outlined three key initiatives aimed at strengthening Malaysia's artificial intelligence (AI) ecosystem, in a move to position the country as a regional hub for AI innovation by 2030.

Deputy Digital Minister Datuk Wilson Ugak Kumbong said these initiatives comprise the National Artificial Intelligence (AI) Roadmap (2016–2030), the AI Code of Ethics, and the establishment of a dedicated AI Centre of Excellence.

Speaking at the Artificial Intelligence and Robotics Festival 2025 (AirFest 2025) at Universiti Teknologi Malaysia (UTM), Ugak said the AI Roadmap, originally developed under the Ministry of Science, Technology and Innovation, serves as a strategic framework to guide AI adoption and innovation in key economic sectors. 'This roadmap outlines comprehensive policies and strategies to drive economic growth and strengthen national competitiveness. Our goal is to position Malaysia as a leading AI hub in Southeast Asia by 2030,' he said.

He said the AI Code of Ethics will serve as a critical guide to ensure the responsible and ethical use of AI technologies, especially in sectors such as healthcare, transportation, agriculture, education, public services and SMEs. He added that the third initiative, the AI Centre of Excellence, aims to accelerate AI integration nationwide, focusing on refining implementation strategies to maximise the impact of AI on both the economy and society.

The festival brings together researchers, industry leaders and policymakers to explore AI's transformative potential through forums, exhibitions, competitions and training sessions.
