Bernama Organises AI Training For Media Practitioners In S'wak


Bernama, 28-04-2025

GENERAL
KUCHING, April 28 (Bernama) – The Malaysian National News Agency (Bernama) is organising a two-day training programme, AI For Journalism: Transcription & Smart Tools, beginning today.
The artificial intelligence (AI) technology training programme, held at the Centre for Technology Excellence Sarawak (CENTEXS), involves the participation of 30 media practitioners from various agencies in Sarawak.
The programme's instructor, Muhammad Hafiz Abdul Majid, said one of the main objectives of the training was to speed up transcription and fact-checking using AI technology.
'We understand that among the tasks of journalists is transcribing audio or video from speeches and press conferences, as well as cross-checking the accuracy of certain facts.
'Through this training, we teach them how to use AI to solve these problems. If it previously took up to an hour to transcribe, with AI it can now be done in just five or 10 minutes,' he told Bernama.
Elaborating further, Muhammad Hafiz said every media organisation must train its staff to remain competitive in a rapidly evolving technological landscape.
'When AI technology was first introduced, many claimed that AI would replace human jobs. However, I disagree, because if used correctly, AI can enhance our capabilities, including productivity,' he said.
The Bernama Centre of Excellence is coordinating the AI for media course under the National Journalists' Day (HAWANA) 2025 allocation.
A similar training programme will be held in Sabah on May 6 and 7.
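The article does not name the tools used in the training, but as a rough illustration of the transcription speed-up Muhammad Hafiz describes, an open-source speech-recognition library such as Whisper can transcribe a recording in a few lines of Python (the file name below is a placeholder, not from the article):

    # A minimal transcription sketch using the open-source "openai-whisper"
    # package (an assumption; the training's actual tools are not named).
    # Install with: pip install openai-whisper
    import whisper

    model = whisper.load_model("base")                 # small multilingual model
    result = model.transcribe("press_conference.mp3")  # placeholder file name
    print(result["text"])                              # full transcript as plain text

On a typical workstation, a model of this size transcribes an hour-long recording in minutes rather than the hour a manual transcription would take, which is the order of speed-up quoted above.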


Related Articles

OpenAI inks surprise cloud deal with Google to boost AI power

New Straits Times, 41 minutes ago

SAN FRANCISCO: OpenAI plans to add Alphabet's Google cloud service to meet its growing needs for computing capacity, three sources told Reuters, marking a surprising collaboration between two prominent competitors in the artificial intelligence sector.

The deal, which had been under discussion for a few months, was finalised in May, one of the sources added. It underscores how massive computing demands to train and deploy AI models are reshaping the competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major backer Microsoft, including its high-profile Stargate data centre project.

It is a win for Google's cloud unit, which will supply additional computing capacity to OpenAI's existing infrastructure for training and running its AI models, said the sources, who requested anonymity to discuss private matters.

The move also comes as OpenAI's ChatGPT poses the biggest threat to Google's dominant search business in years, with Google executives recently saying that the AI race may not be winner-take-all.

OpenAI, Google and Microsoft declined to comment. Alphabet's stock was up 2.1 per cent on Tuesday afternoon following the news, while Microsoft shares were down 0.6 per cent.

Scotiabank analysts called the development "somewhat surprising" in a note on Tuesday, highlighting the growth opportunities for Google's cloud unit while expressing caution about competition from ChatGPT.

"The deal ... underscores the fact that the two are willing to overlook heavy competition between them to meet the massive computing demands. Ultimately, we view this as a big win for Google's cloud unit, but ... there are continued worries that ChatGPT is becoming an incrementally larger threat to Google's search dominance," the analysts wrote.

Since ChatGPT burst onto the scene in late 2022, OpenAI has dealt with increasing demand for computing capacity - known in the industry as compute - for training large language models, as well as for running inference, which involves processing information so people can use these models.

OpenAI said on Monday that its annualised revenue run rate surged to US$10 billion as of June, positioning the company to hit its full-year target amid booming adoption of AI.

Earlier this year, OpenAI partnered with SoftBank and Oracle on the US$500 billion Stargate infrastructure programme, and signed deals worth billions with CoreWeave for more compute. It is on track this year to finalise the design of its first in-house chip, which could reduce its dependency on external hardware providers, Reuters reported in February.

The partnership with Google is the latest of several manoeuvres OpenAI has made to reduce its dependency on Microsoft, whose Azure cloud service had served as the ChatGPT maker's exclusive data centre infrastructure provider until January. Google and OpenAI discussed an arrangement for months but were previously blocked from signing a deal because of OpenAI's lock-in with Microsoft, a source told Reuters.

Microsoft and OpenAI are also in negotiations to revise the terms of their multibillion-dollar investment, including the future equity stake Microsoft will hold in OpenAI.

For Google, the deal comes as the tech giant is expanding external availability of its in-house chips known as tensor processing units, or TPUs, which were historically reserved for internal use. That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders.

Google's addition of OpenAI to its customer list shows how the tech giant has capitalised on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.

Google Cloud, whose US$43 billion in sales comprised 12 per cent of Alphabet's 2024 revenue, has positioned itself as a neutral arbiter of computing resources in an effort to outflank Amazon and Microsoft as the cloud provider of choice for a rising legion of AI startups whose heavy infrastructure demands generate costly bills.

Alphabet faces market pressure to demonstrate financial returns on its AI-related capital expenditures, which are expected to hit US$75 billion this year, while maintaining its bottom line against the threat of competing AI offerings, as well as antitrust enforcement.

Google's DeepMind AI unit also competes directly with OpenAI and Anthropic in a race to develop the best models and integrate those advances into consumer applications. Selling computing power reduces Google's own supply of chips while bolstering capacity-constrained rivals.

The OpenAI deal will further complicate how Alphabet CEO Sundar Pichai allocates capacity between the competing interests of Google's enterprise and consumer business segments. Google already lacked sufficient capacity to meet its cloud customers' demands as of the last quarter, chief financial officer Anat Ashkenazi told analysts in April.

Although ChatGPT holds a large lead over Google's competing chatbot in monthly users, and analysts have predicted it could erode Google's dominant search market share, Pichai has brushed aside concerns that OpenAI will usurp Google's business dominance. --REUTERS

PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis

The Sun, an hour ago

HONG KONG SAR - Media OutReach Newswire - 10 June 2025 - While artificial intelligence (AI) technology is evolving rapidly, AI models still struggle to understand long videos. A research team from The Hong Kong Polytechnic University (PolyU) has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating the human way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to world-leading AI conferences.

Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. Because visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.

Prof. Changwen CHEN, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, and his team have achieved a breakthrough in research on long video reasoning by AI. In designing VideoMind, they made reference to the human process of video understanding and introduced a role-based workflow. The four roles in the framework are: the Planner, which coordinates all other roles for each query; the Grounder, which localises and retrieves relevant moments; the Verifier, which validates the accuracy of the retrieved moments and selects the most reliable one; and the Answerer, which generates the query-aware answer. This progressive approach to video understanding helps address the challenge of temporally grounded reasoning that most AI models face.

Another core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a fine-tuning technique that has emerged in recent years; it adapts AI models for specific uses without full-parameter retraining. The Chain-of-LoRA strategy pioneered by the team applies four lightweight LoRA adapters in a unified model, each designed for a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling, seamlessly switching among the roles. This eliminates the need and cost of deploying multiple models while enhancing the efficiency and flexibility of a single model.

VideoMind is open source on GitHub and Hugging Face, along with details of the experiments conducted to evaluate its effectiveness in temporally grounded video understanding across 14 diverse benchmarks. Comparing VideoMind with state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that VideoMind's grounding accuracy outperformed all competitors on challenging tasks involving videos with an average duration of 27 minutes. Notably, the team evaluated two versions of VideoMind: one built on a smaller, two billion (2B) parameter model and another on a larger, seven billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind yielded performance comparable to that of many other 7B models.

Prof. Chen said, 'Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient: the human brain uses only about 25 watts of power, about a million times less than a supercomputer of equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos the way humans do, while leveraging the Chain-of-LoRA strategy to minimise the computing power and memory needed in the process.'

AI is at the core of global technological development, but the advancement of AI models is constrained by insufficient computing power and excessive power consumption. Built upon the unified, open-source model Qwen2-VL and augmented with additional optimisation tools, the VideoMind framework lowers the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models.

Prof. Chen added, 'VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to areas such as intelligent surveillance, sports and entertainment video analysis, video search engines and more.'
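The article describes VideoMind's architecture only at a high level. The following is a minimal Python sketch of the role-based workflow it outlines, with plain functions standing in for the role-specific LoRA adapters; all names and stub logic are illustrative assumptions, not PolyU's implementation:

    # Illustrative sketch of the Planner/Grounder/Verifier/Answerer workflow.
    # In VideoMind these roles are LoRA adapters activated on one shared model;
    # here they are plain functions so the control flow is runnable on its own.

    def planner(query):
        # Decide which roles to invoke for this query (fixed plan for simplicity).
        return ["grounder", "verifier", "answerer"]

    def grounder(video, query):
        # Localise candidate moments relevant to the query (stubbed spans, seconds).
        return [(120, 150), (300, 330)]

    def verifier(video, query, moments):
        # Validate the candidates and keep the most reliable one (stub: take first).
        return moments[0]

    def answerer(video, query, moment):
        # Generate a query-aware answer from the selected moment (stubbed).
        start, end = moment
        return f"Answer derived from the segment at {start}s to {end}s."

    def videomind(video, query):
        # "Self-calling" loop: each step mimics switching the active LoRA adapter
        # on a single base model rather than invoking a separate model per role.
        moments, moment, answer = None, None, None
        for role in planner(query):
            if role == "grounder":
                moments = grounder(video, query)
            elif role == "verifier":
                moment = verifier(video, query, moments)
            elif role == "answerer":
                answer = answerer(video, query, moment)
        return answer

    print(videomind("lecture.mp4", "When does the speaker define LoRA?"))

The design point the sketch illustrates is the one the article emphasises: because all four roles share one base model and differ only in which lightweight adapter is active, switching roles costs far less than deploying four separate models.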

Over 150,000 Visitors Expected At TVET Day, 5,000 Jobs On Offer

Bernama, 4 hours ago

10/06/2025 05:24 PM

PUTRAJAYA, June 10 (Bernama) -- The National Technical and Vocational Education and Training (TVET) Day 2025 celebration is expected to attract over 150,000 visitors, with more than 5,000 job opportunities offered through career matching with employers from various strategic sectors, said Datuk Seri Dr Ahmad Zahid Hamidi.

In a statement issued today after chairing the Special Coordination Meeting for National TVET Day 2025, the Deputy Prime Minister said the celebration brings together broad participation from various government agencies, TVET institutions, industries and the general public, aiming for a direct impact on human capital development and employment opportunities.

The three-day event will begin tomorrow at the Melaka International Trade Centre (MITC) in Ayer Keroh, and Prime Minister Datuk Seri Anwar Ibrahim is scheduled to officially launch it on Thursday.

'The official launch on June 12 is expected to include several strategic announcements related to the agenda of strengthening national TVET. Key focus areas will include efforts to encourage wider participation of TVET graduates in high-impact entrepreneurship sectors, in line with current economic needs.

'This step is hoped to further strengthen TVET's role as a driver of growth in high-potential sectors,' said Ahmad Zahid, who is also the Minister of Rural and Regional Development.

With the theme "TVET: The Career of Choice", he said the celebration symbolises the MADANI government's commitment to making TVET a core pillar of national development and reinforces its dedication to mainstreaming the skills pathway as part of the country's progressive, inclusive and sustainable Malaysia MADANI aspirations.

He said the event will be implemented comprehensively and with high impact, featuring various inclusive, interactive and strategic activities aimed at bringing the public closer to the world of TVET.

According to him, the National TVET Day 2025 celebration will consist of four main segments, including the TVET Convention Hub, TVET Job Fair and TVET Challenge.

'The celebration is expected to take place in a positive atmosphere, with growing public interest in the skills pathway,' he said.
