
MDA announces new leadership for 2025–2026 term amid industry evolution
MDA's new leadership for the 2025–2026 term: (from left) Nicholas Sagau, Pranav Jha, Lydia Wang and Cindy Eliza Vaz.
KUALA LUMPUR: The Malaysian Digital Association (MDA) has announced key leadership appointments for the 2025–2026 term, reinforcing its commitment to keeping pace with a digital media and marketing landscape that is being reshaped by the rise of AI and by shifting global economic and geopolitical conditions affecting digital businesses.
REV Media Group chief operating officer Nicholas Sagau Tony Ngimat has been appointed MDA's new president.
At the same time, GroupM Malaysia media practices head Pranav Jha has been elected its vice president.
The appointments follow the recent promotion of former president Eileen Ooi to chief executive officer of PHD Asia Pacific (APAC), an achievement that underscores Malaysia's influence in the regional digital arena.
In addition, MDA has appointed IPG Mediabrands chief digital officer Cindy Eliza Vaz as the new treasurer. With her extensive experience in data-driven marketing and media solutions, she will bring valuable expertise to the leadership team.
Lydia Wang, who was recently promoted to Star Media Group Bhd (SMG) chief operating officer, will continue to serve as MDA's secretary.
"The digital landscape is at a critical juncture, with AI set to disrupt the ecosystem like previous technologies have reshaped the digital world.
'It's now more crucial than ever for MDA to act as a facilitator, driving collaboration and innovation,' said Nicholas Sagau.
Pranav added: "I am honoured and excited to take on this new role at MDA. It's been a very enriching journey for me over the last three years, being part of this convergence of ad agencies, publishers, advertisers and tech companies.
"I am looking forward to strengthening MDA's promise of nurturing talents, shaping digital and tech best practices, driving new cross-industry initiatives and recognising the best work of our people."
The updated MDA Council for the 2025–2026 term includes key industry players such as Astro Media Solutions, Dentsu Holdings (M) Sdn Bhd, FCB Shout, REV Media Group, iMedia Asia Sdn Bhd, PETRONAS, Sinar Karangkraf Sdn Bhd, SMG, Vivaki Malaysia and Xamble Group Limited.
For more information about MDA and its initiatives, visit the official website.

Related Articles


New Straits Times
OpenAI inks surprise cloud deal with Google to boost AI power
SAN FRANCISCO: OpenAI plans to add Alphabet's Google cloud service to meet its growing needs for computing capacity, three sources told Reuters, marking a surprising collaboration between two prominent competitors in the artificial intelligence sector.
The deal, which has been under discussion for a few months, was finalised in May, one of the sources added. It underscores how massive computing demands to train and deploy AI models are reshaping the competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major supporter Microsoft, including its high-profile Stargate data centre project.
It is a win for Google's cloud unit, which will supply additional computing capacity to OpenAI's existing infrastructure for training and running its AI models, said the sources, who requested anonymity to discuss private matters.
The move also comes as OpenAI's ChatGPT poses the biggest threat to Google's dominant search business in years, with Google executives recently saying that the AI race may not be winner-take-all.
OpenAI, Google and Microsoft declined to comment. Alphabet's stock was up 2.1 per cent on Tuesday afternoon following the news, while Microsoft shares were down 0.6 per cent.
Scotiabank analysts called the development "somewhat surprising" in a note on Tuesday, highlighting the growth opportunities for Google's Cloud unit, while expressing caution regarding competition from ChatGPT.
"The deal ... underscores the fact that the two are willing to overlook heavy competition between them to meet the massive computing demands. Ultimately, we view this as a big win for Google's cloud unit, but ... there are continued worries that ChatGPT is becoming an incrementally larger threat to Google's search dominance," the analysts wrote.
Since ChatGPT burst onto the scene in late 2022, OpenAI has dealt with increasing demand for computing capacity - known in the industry as compute - for training large language models, as well as for running inference, which involves processing information so people can use these models.
OpenAI said on Monday that its annualised revenue run rate surged to US$10 billion as of June, positioning the company to hit its full-year target amid booming adoption of AI.
Earlier this year, OpenAI partnered with SoftBank and Oracle on the US$500 billion Stargate infrastructure program, and signed deals worth billions with CoreWeave for more compute. It is on track this year to finalise the design of its first in-house chip that could reduce its dependency on external hardware providers, Reuters reported in February.
The partnership with Google is the latest of several manoeuvres by OpenAI to reduce its dependency on Microsoft, whose Azure cloud service had served as the ChatGPT maker's exclusive data centre infrastructure provider until January.
Google and OpenAI discussed an arrangement for months but were previously blocked from signing a deal due to OpenAI's lock-in with Microsoft, a source told Reuters. Microsoft and OpenAI are also in negotiations to revise the terms of their multibillion-dollar investment, including the future equity stake Microsoft will hold in OpenAI.
For Google, the deal comes as the tech giant is expanding external availability of its in-house chips known as tensor processing units, or TPUs, which were historically reserved for internal use.
That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders. Google's addition of OpenAI to its customer list shows how the tech giant has capitalised on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.
Google Cloud, whose US$43 billion in sales comprised 12 per cent of Alphabet's 2024 revenue, has positioned itself as a neutral arbiter of computing resources in an effort to outflank Amazon and Microsoft as the cloud provider of choice for a rising legion of AI startups whose heavy infrastructure demands generate costly bills.
Alphabet faces market pressure to demonstrate financial returns on its AI-related capital expenditures, which are expected to hit US$75 billion this year, while maintaining its bottom line against the threat of competing AI offerings, as well as antitrust enforcement.
Google's DeepMind AI unit also competes directly with OpenAI and Anthropic in a race to develop the best models and integrate those advances into consumer applications. Selling computing power reduces Google's own supply of chips while bolstering capacity-constrained rivals. The OpenAI deal will further complicate how Alphabet CEO Sundar Pichai allocates capacity between the competing interests of Google's enterprise and consumer business segments.
Google already lacked sufficient capacity to meet its cloud customers' demands as of the last quarter, Chief Financial Officer Anat Ashkenazi told analysts in April.
Although ChatGPT holds a large lead over Google's competing chatbot in terms of monthly users and analysts have predicted it could reduce Google's dominant search market share, Pichai has brushed aside concerns that OpenAI will usurp Google's business dominance. --REUTERS


The Sun
PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis
HONG KONG SAR - Media OutReach Newswire - 10 June 2025 - While Artificial Intelligence (AI) technology is evolving rapidly, AI models still struggle with understanding long videos. A research team from The Hong Kong Polytechnic University (PolyU) has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating humans' way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to world-leading AI conferences.
Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. As visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.
Prof. Changwen CHEN, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, and his team have achieved a breakthrough in research on long video reasoning by AI. In designing VideoMind, they made reference to a human-like process of video understanding and introduced a role-based workflow. The four roles included in the framework are: the Planner, to coordinate all other roles for each query; the Grounder, to localise and retrieve relevant moments; the Verifier, to validate the information accuracy of the retrieved moments and select the most reliable one; and the Answerer, to generate the query-aware answer. This progressive approach to video understanding helps address the challenge of temporal-grounded reasoning that most AI models face.
Another core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a fine-tuning technique that has emerged in recent years; it adapts AI models for specific uses without full-parameter retraining. The chain-of-LoRA strategy pioneered by the team applies four lightweight LoRA adapters in a unified model, each designed for a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling to seamlessly switch among these roles, eliminating the need and cost of deploying multiple models while enhancing the efficiency and flexibility of the single model.
VideoMind is open source on GitHub and Hugging Face. Details of the experiments conducted to evaluate its effectiveness in temporal-grounded video understanding across 14 diverse benchmarks are also available. Comparing VideoMind with state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that VideoMind's grounding accuracy outperformed all competitors in challenging tasks involving videos with an average duration of 27 minutes. Notably, the team included two versions of VideoMind in the experiments: one with a smaller, 2 billion (2B) parameter model, and another with a bigger, 7 billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind still yielded performance comparable with many of the other 7B models.
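To make the division of labour concrete, the following Python sketch walks one query through the four roles on a single shared backbone. It is purely illustrative: the class, the activate and answer_query names, and the hard-coded candidate moments are assumptions made for exposition, not VideoMind's actual API or outputs.

from dataclasses import dataclass

ROLES = ("planner", "grounder", "verifier", "answerer")

@dataclass
class Moment:
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds
    score: float    # confidence that the segment answers the query

class UnifiedVideoLM:
    """One backbone model; a lightweight, role-specific LoRA adapter is activated for whichever role is currently needed."""

    def __init__(self) -> None:
        self.active_role = None

    def activate(self, role: str) -> None:
        # In the real framework this would swap in the role's LoRA weights;
        # here we only record which role is active.
        assert role in ROLES
        self.active_role = role

def answer_query(model: UnifiedVideoLM, video: str, query: str) -> str:
    # Planner: decide which roles this query needs and in what order.
    model.activate("planner")
    # Grounder: localise candidate moments in the video relevant to the query.
    model.activate("grounder")
    candidates = [Moment(61.0, 95.0, 0.82), Moment(410.0, 447.0, 0.55)]  # dummy output
    # Verifier: check the candidates and keep the most reliable one.
    model.activate("verifier")
    best = max(candidates, key=lambda m: m.score)
    # Answerer: generate the final, query-aware answer from the chosen moment.
    model.activate("answerer")
    return f"Answer to '{query}' based on {video} segment {best.start_s:.0f}-{best.end_s:.0f}s"

if __name__ == "__main__":
    print(answer_query(UnifiedVideoLM(), "lecture.mp4", "When is LoRA introduced?"))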
Prof. Chen said, 'Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient, with the human brain using only about 25 watts of power, which is about a million times lower than that of a supercomputer with equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos like humans, while leveraging the chain-of-LoRA strategy to minimise the need for computing power and memory in this process.'
AI is at the core of global technological development, but the advancement of AI models is constrained by insufficient computing power and excessive power consumption. Built upon the unified, open-source model Qwen2-VL and augmented with additional optimisation tools, the VideoMind framework lowers the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models.
Prof. Chen added, 'VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to various areas, such as intelligent surveillance, sports and entertainment video analysis, video search engines and more.'
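For readers curious how four role-specific adapters on one Qwen2-VL backbone might be wired up in practice, here is a hedged sketch using the Hugging Face PEFT library's adapter-switching calls. The "videomind/<role>-lora" repository names are hypothetical placeholders and the exact model class may vary across transformers versions; the project's actual loading code is in its GitHub repository.

# Hedged sketch, not VideoMind's published code: four hypothetical LoRA adapters
# attached to one Qwen2-VL backbone and switched at inference time via PEFT.
from transformers import Qwen2VLForConditionalGeneration
from peft import PeftModel

base = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Attach the first adapter, then register the remaining three on the same model.
model = PeftModel.from_pretrained(base, "videomind/planner-lora", adapter_name="planner")
for role in ("grounder", "verifier", "answerer"):
    model.load_adapter(f"videomind/{role}-lora", adapter_name=role)

model.set_adapter("grounder")   # activate the Grounder's LoRA weights
# ... run moment localisation with the Grounder active ...
model.set_adapter("answerer")   # switch to the Answerer to generate the final reply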

