How prepared are we for an AI-first future?

The Star · 6 days ago
THE impact of artificial intelligence (AI) on the labour market and the economy as a whole is a topic of intense debate: AI is both a disruptor and a collaborator.
The technological revolution enabled by AI has the potential to reshape the labour market, including the nature of work, job roles, the skills required to thrive, the opportunities available and employment dynamics across various industries.
With AI advancing at a rapid pace, the labour market and workforce will undergo significant transformation unlike any before.
The impact of AI on the labour market is multifaceted and complex; while it enhances productivity, increases process efficiency and creates opportunities, it also brings threats and challenges.
AI has the potential to automate tasks, enhance the decision-making process, and create new job opportunities in fields such as data analytics, machine learning and AI development.
Nevertheless, it raises concerns about job displacement, particularly in industries that rely heavily on routine and repetitive tasks; skill polarisation; and ethics.
Many established organisations and institutions have estimated the likelihood of jobs being replaced by AI.
The findings are striking: the United Nations Trade and Development (UNCTAD) estimated that up to 40% of all jobs could be affected by AI.
An International Monetary Fund analysis showed that almost 40% of global employment is exposed to AI: about 60% of jobs may be impacted in advanced economies, while in emerging markets and low-income countries, AI exposure is expected to be 40% and 26%, respectively.
According to a report by the World Economic Forum, by 2025, AI will have displaced 75 million jobs globally, but will have created 133 million new jobs.
This means that there will be a net gain of 58 million jobs globally.
A report by McKinsey & Co says that AI is expected to create 20 million to 50 million new jobs globally by 2030.
These new jobs will be in a range of industries, including healthcare, manufacturing and finance.
AI is already spreading through the Malaysian economy.
The scale of its impact is potentially significant. The uptake of AI technologies could occur in waves and at different phases of AI development.
While different forms of AI-powered tools and applications could be deployed across occupations and sectors, there could be push-back from those whose businesses or employment are disrupted by AI-based innovations.
TalentCorp Malaysia's impact study indicated that approximately 1.8 million Malaysian employees are expected to be affected by the transition to AI, digitalisation and the green economy, with varying levels of exposure.
Of this, around 620,000 workers (18%) are projected to be highly impacted within the next three to five years, while another 1.2 million (35%) are expected to face moderate impact.
Together, these groups account for roughly 53% of the 3.5 million skilled and semi-skilled employees who are directly engaged in the core operations across key sectors.
The study focuses on 10 key sectors chosen for their significant contributions to Malaysia's gross domestic product: aerospace; chemicals; electrical and electronics; energy and power; food manufacturing and services; global business services; ICT; medical devices; pharmaceutical manufacturing; and wholesale and retail trade.
The Ipsos AI Monitor 2025 revealed that 62% of adults surveyed in Malaysia in 2024 think that AI will replace their current job within the next five years.
According to the Human Resources Ministry's response in Parliament, a total of 293,639 workers in Malaysia lost their jobs to automation and AI between 2020 and September 2024.
The impact of AI on the workforce will vary by industry and state.
AI's transformation of the labour market will not be seamless. Policymakers, society, businesses, employers and employees must adapt to the changes and disruptions AI generates, ensuring its challenges are responsibly addressed.
Policymakers should proactively prepare the workforce and labour markets to ride the wave of AI disruption through education, training and skills development, and adaptable labour market policies.
So, what roles should the Malaysian government play at this stage of AI uptake and development?
The government must act now to prepare our workforce for a more radical future. The following action plans would prepare our students, graduates and youth, as well as the current workforce, to adapt to the AI era.
> It begins with integrating AI into our education system to future-proof Malaysia's next generation of workers.
School curricula must focus on AI-complementary skills and adapt to the changing landscape of AI, incorporating AI literacy, digital skills and critical thinking into the education system.
AI-powered tools are already being used in adaptive learning platforms and intelligent tutoring systems, to automate elements of lessons and grading, and to personalise the learning experience.
China, for example, will integrate AI applications into teaching efforts, textbooks and the school curriculum as it moves to overhaul education.
Integrating AI into Technical and Vocational Education and Training (TVET) and apprenticeship programmes offers a powerful way to enhance learning, improve employability, and prepare students for the realities of the modern workplace.
> The five most essential skills Malaysians must be equipped with to survive and thrive in this dynamic, technology-infused work landscape are analytical skills; functional and job-specific skills; project management; creativity and innovation; and advanced digital skills.
> Combining research and development (R&D) tax credits, R&D grants and free, open-source products can support small and medium enterprises (SMEs) in adopting AI in their business operations, including incentivising a broader uptake of AI training assistants to help increase process efficiency.
AI assistants can significantly boost the productivity and performance of lower-performing workers by automating repetitive tasks, providing data-driven insights, and offering personalised learning opportunities.
Dedicated public institutions can significantly facilitate the adoption and spread of AI within firms.
Firms have cited uncertainty over the return on AI investment as a critical obstacle to adopting AI.
These institutions can help SMEs find information and obtain advice and guidance on AI adoption; for instance, they can provide guidelines or a framework to help SMEs navigate the vendor selection process, as well as create networking and collaboration platforms between the public and private sectors to help businesses build AI capabilities.
> Encourage the broad adoption of AI across all firms, regardless of size, through the development of a new AI Pathfinder or AI Accelerator programme – a structured initiative designed to support and accelerate the growth of startups and businesses focused on AI technologies.
These programmes typically offer mentorship, technical support, access to resources, and networking opportunities to help participants develop and scale their AI-driven products and services.
We can draw on successful schemes in Singapore and Germany that support business transformation through new AI-powered personalised services.
For example, the Monetary Authority of Singapore (MAS) Pathfinder Programme for financial sector AI adoption is a collaborative initiative between MAS and the financial industry that fosters knowledge exchange on AI implementation.
Participating financial institutions share their experience implementing AI solutions while also gaining insights from the collective experiences of their peers.
> Build an interactive labour market system to create early awareness of opportunities and help our workforce reskill and upskill, equipping workers with the new skillsets required to handle AI-driven tasks and take up the new job opportunities created by AI.
The system would provide real-time analysis of the demand for and supply of AI-related jobs, including which job roles will be displaced by AI, which industries will create new sources of employment, which skills will be in demand and which training courses are suitable.
> Upskilling and reskilling programmes are critical for facilitating workers' transition to new employment opportunities and for equipping individuals with the necessary skills to thrive in an AI-enhanced economy.
What is important is that workers receive the right training and skills for this transformation.
The government and the private sector can collaborate to identify emerging skills gaps and develop targeted training programmes.
Lee Heng Guie is executive director of the Socio-Economic Research Centre. The views expressed here are the writer's own.

Related Articles

AI will not replace jobs, but redefine them

New Straits Times · 26 minutes ago

GEORGE TOWN: Artificial intelligence (AI) is not the threat to employment it is often made out to be. Instead, it is rapidly transforming Malaysia's job landscape — not by replacing human workers, but by creating new roles and redefining existing ones.

Human Resources Minister Steven Sim Chee Keong said that as Malaysia accelerates its transition into a digital economy, the public must shed the fear that AI will lead to widespread job losses.

"We must move away from the fear that AI is here to take away jobs.

"The evidence shows that AI is transforming jobs, not eliminating them. Those who adapt and upskill will find more opportunities, not fewer," he told newsmen here after the launch of the 52nd ARTDO International Conference.

Elaborating, Sim said a national labour market study commissioned by his ministry late last year revealed that up to 60 per cent of employers across 10 key economic sectors expected AI to lead to job creation, particularly in areas requiring advanced digital skills. The study, which included extensive inputs from industry stakeholders, is now informing national policy on workforce readiness.

"AI is changing how we work, not eliminating the need for work itself.

"The question is not whether AI will take over, but whether we are preparing Malaysians to work with AI," he added.

Sim said the ministry was actively responding to these changes by expanding training and upskilling initiatives through platforms such as MyMahir, a national skills-matching portal.

He said under the ministry's initiative, over 33 government-run industrial training institutes and various private providers were now offering AI and digital literacy courses. He said these courses are aimed at equipping the workforce with relevant competencies in an increasingly automated landscape.

"With the right upskilling, workers can transition into higher-value roles rather than being displaced.

"AI literacy must become mainstream. Not everyone will become an AI engineer, but basic familiarity with how AI works will soon be essential across nearly every profession," he said.

He also said that his ministry was working to integrate AI ethics and governance into public awareness, recognising that responsible deployment was just as important as technical know-how. During the recent National Training Week, between 40 and 50 per cent of offerings had AI components, including modules on ethical usage, data protection and digital responsibility.

"We want Malaysians to not only use AI but to use it responsibly.

"Understanding the social, legal and moral implications of AI is a national priority. We are building not just a skilled workforce, but a trusted digital society," he said.

Balancing screen time and safety: The challenge for today's parents

The Star · an hour ago

At what age did you get your first smartphone or sign up for social media? For many adults, it likely happened in their late teens or early twenties, but for kids today, their dive into the digital world often comes much earlier, at times even before they start schooling.

Countless concerns have been raised on whether children are being exposed to too much, too soon, and the potentially detrimental long-term effects that may come alongside it.

Countries like Australia and France have taken a hardline stance on the matter, with France passing a parental consent law for users under 15, and Australia's under-16 ban set to be enforced in December. French President Emmanuel Macron has even said that the country would impose a further blanket ban on social media use for those under 15, should progress at the EU level to limit teenage screen time lag behind.

Meanwhile, Communications Minister Datuk Fahmi Fadzil said back in January this year that the country does not currently have any plans to impose a minimum age requirement when it comes to social media access. He later said in March that any move to impose such a restriction would require a thorough analysis of how it could affect access to information and communication among those impacted, as well as the potential psychological and developmental implications.

He also noted that most platforms have set 13 years old as a minimum age requirement, and said that the Malaysian government would monitor their enforcement.

According to Siraj Jalil, president of the Malaysia Cyber Consumer Association (MCCA), such monitoring is a step forward, but is still not an airtight solution to the issue.

'Globally, platforms struggle to enforce age restrictions effectively, often relying on self-declaration mechanisms that are easily circumvented.

'Enforcement tends to be weak unless backed by strong regulatory requirements and technology-based age assurance.

'Malaysia should expect platforms to strengthen their verification systems and should complement this with national efforts to raise awareness among parents and children about the importance of respecting these thresholds,' he said.

Srividhya Ganapathy, the co-chairperson of the Child Rights Innovation and Betterment (CRIB) Foundation, on the other hand, advocated for a more serious approach, stressing that monitoring alone is not enough.

'In practice, children regularly create accounts long before the age of 13, often without any real safeguards in place. The so-called enforcement of age restrictions is inconsistent and largely symbolic.

'Once online, children face a range of risks – cyberbullying being one of the most prevalent. Many children are targeted in private messages or group chats, with little visibility or intervention from adults. For some, the bullying continues across multiple platforms, and the lack of a clear support or reporting pathway means the harm often goes unnoticed and unaddressed.

'We cannot continue to rely on platforms to police themselves. Vague promises of monitoring aren't enough. We need enforceable standards, better age verification, and a proactive, not reactive, approach to safeguarding children online. Children's safety should not be left to the goodwill of corporations,' she said.

Tech too soon?

From the perspective of those like Srividhya, haphazardly setting an age requirement is not the end of the story. She believes that while such age requirements may serve as a benchmark, there needs to be an overarching strategy that includes measures to provide age-appropriate education so that kids learn how to engage the digital world via smartphones and social media when the time eventually comes.

'Outright bans often drive children to access technology in secret, without support or protection.

'Instead, we must equip them with the knowledge and confidence to navigate digital spaces safely and responsibly. A minimum age should be the starting point, not the solution.

'We also need to acknowledge the realities faced by Malaysian families. Smartphones are no longer luxuries; they are everyday tools for communication, education, and payment.

'Many parents – especially those who are divorced or working full-time – rely on phones to stay connected with their children throughout the day, whether during custody transitions, at daycare or tuition, or while their child is commuting alone,' she said.

Siraj similarly added that while such restrictions may, in theory, prevent younger children from creating their accounts, they have little bearing on children with access to smartphones who can circumvent them to consume content on these platforms.

'In reality, many children under 13 actively use platforms like YouTube and TikTok, even if the accounts are registered under parents or older siblings.

'Therefore, while benchmarks are important, the more urgent need is for better education and resources for parents and children to use these technologies responsibly and safely from an early age,' he said.

While it's clear that smartphones and social media have a place, parental involvement and guidance are crucial, at least according to Allistair Adam Anak Nelson, a registered clinical psychologist and lecturer at the Taylor's University School of Liberal Arts and Sciences.

He noted that research has drawn a link between the excessive use of social media and screen time at an early age with higher levels of depression, anxiety, poor sleep, body dissatisfaction and low self-esteem. These concerns are only heightened by exposure to things such as cyberbullying, unrealistic body standards, and the constant need for online validation.

'Childhood and early adulthood, roughly from the age of 10 to early 20s, is a time when the brain is still developing, especially in areas related to emotion regulation, self-control, and social awareness.

'This makes young people more vulnerable to the emotional highs and lows of social media.

'Age restrictions merely delay the exposure to age-inappropriate or harmful content, as many children can easily bypass them by entering false birthdates.

'Restrictions alone do not address the need for children to learn how to navigate the digital space safely and responsibly,' he said, further stressing the need for digital literacy.

Allistair Adam added that it could come in the form of screen time boundaries set by parents, co-viewing content, open communication on media use, and modelling healthy digital habits.

'Children need more than just the ability to use a device – they must understand privacy, recognise safe content, and distinguish between reality and fantasy.'

Meanwhile, Raihan Munira Moh Sani, a lecturer with the Asia Pacific University of Technology and Innovation (APU) School of Psychology, said that these technologies should not be seen as being inherently harmful.

'When assessing a child's readiness for a smartphone or social media access, it is essential to look beyond age and consider developmental indicators.

'One important factor is social awareness and empathy, where children should be able to understand the impact of their words and actions on others and demonstrate respectful behaviour in both online and offline peer interactions.

'Equally important is their understanding of boundaries. This includes knowing what is appropriate to share online, recognising the importance of privacy, and being aware of screen time limits.

'These indicators reflect a child's ability to navigate digital spaces responsibly and safely,' she said.

Clicking into childhood

From Allistair Adam's point of view, there are no hard and fast rules on the 'right' age when it comes to children using things like smartphones, social media, or even engaging in online games.

'Often, smartphones are given to children as a digital pacifier to keep them calm or preoccupied, especially in public settings.

'While this may offer quick relief, developing healthy screen time usage requires more thoughtful consideration,' he said.

There are some guideposts available for parents to have a point of reference. For instance, Allistair Adam said that the World Health Organization (WHO) does not recommend screen time at all for kids below two years of age, while those aged between two to four should be limited to just an hour each day.

Raihan Munira, on the other hand, said that the American Academy of Pediatrics recommends that children under 18 months old be kept off screen-based media entirely, except for video chatting. She further said that for children aged 18 to 24 months, any digital content should be limited to high-quality programming viewed together with a parent, while for those aged two to five, screen time should be capped at one hour a day. From age six onwards, parents should set clear, consistent limits on both screen time and content.

Vinorra Shaker, the head of the school of psychology at APU, highlighted that Malaysian children are becoming increasingly connected with the digital world, which has turned out to be somewhat of a double-edged sword.

She said that while Malaysian children are generally tech-savvy, with competency in navigating apps, social media platforms, and games, this does not necessarily translate to being able to engage digital spaces safely.

'Compared to children in some developed countries like those in Scandinavia or parts of Western Europe, Malaysian kids often have less structured digital education.

'This means they might be more exposed to online risks such as cyberbullying, privacy breaches, or harmful content. A Unicef study even found that while Malaysian youth are confident online, many don't fully understand how to protect themselves from threats.

'The good news is that digital literacy programmes are growing in schools, and awareness among parents and educators is increasing.

'But there's still a gap to close when it comes to teaching children not just how to use technology, but how to use it responsibly and safely,' she said.

As Allistair Adam pointed out, 'studies from states such as Kuala Lumpur, Selangor and Kedah show that most preschoolers are already using smartphones and other digital devices regularly'.

One of the studies, 'Screen Media Dependency And Its Associated Factors Among Preschool Children In Kuala Lumpur', published in the Malaysian Journal Of Medicine And Health Sciences in May 2023, found that over 65% of preschoolers in Kuala Lumpur show signs of dependence on their devices.

Another study, 'Determinants of Excessive Screen Time Among Children Under Five Years Old in Selangor, Malaysia: A Cross-Sectional Study', published in the International Journal Of Environmental Research And Public Health in March 2022, found that over 90% of preschoolers in Selangor exceed recommended screen time limits, underscoring a lack of digital maturity among Malaysian children when compared to their peers in countries with structured digital literacy programmes.

'This gap between access and readiness increases their vulnerability to digital dependency, cyberbullying, and misinformation.

'It highlights the need for nationwide digital literacy initiatives and child-focused online safety policies,' he said.

Greater guidance

Srividhya further called for concrete guidelines from the government that are 'clear, practical, and grounded in the realities of Malaysian families, not just borrowed from other jurisdictions or imposed in a top-down way'.

'There's too much uncertainty. Parents, schools, and even platforms are often left to interpret things for themselves, which leads to inconsistent decisions and, ultimately, children falling through the cracks.'

She said that the country does not necessarily need new blanket laws, but rather a framework that provides guidance in the form of minimum standards that also offers some flexibility. This could come in the form of general suggestions for platforms based on age, and the specific kinds of digital competencies children need to access them, along with the responsibilities held by parents, educators, and platforms, she added.

'It's not just about when a child can go online – it's about how they should be supported when they do.

'Right now, our response to children's digital access tends to be reactive. A case goes viral, there's public outcry, and we start talking about bans or surveillance.

'But these approaches don't address the core issue: most children are getting online anyway – often unsupervised and unprepared.

'Without national guidelines that put child rights and child realities at the centre, we're just leaving families to figure it out on their own,' Srividhya said.

For Allistair Adam, it comes down to ensuring that a child is emotionally prepared to engage with the digital world.

'For parents, assessing readiness involves observing whether the child can regulate their emotions, follow rules, manage screen time without being attached, recognise unsafe or inappropriate content, and communicate openly with their parents on their digital use.

'Children should also be able to balance screen time with other important daily activities such as schoolwork, play and family time.

'Policymakers, on the other hand, can support this by ensuring access to early digital literacy education, promoting age-appropriate platform design, and developing national guidelines that safeguard children's digital well-being.

'In the end, readiness is not just whether a child has the skills to manage screen time but whether they have received the right modelling, guidance, and support to use digital tools wisely – that should be the best indicator,' he said.

Regulating online fraud: Malaysia's OSB vs China's approach

The Star · an hour ago

INITIATED in the 2010s, China's Digital Silk Road (DSR) aims to enhance digital connectivity across nations through infrastructure, trade, finance, people-to-people exchanges and policy coordination. It presents new commercial opportunities for Malaysia by fostering collaboration between Chinese and Malaysian businesses, strengthening Malaysia's digital economy ecosystem.

The convergence of China's DSR, the Covid-19 pandemic, and the rapid rise in artificial intelligence (AI) has significantly accelerated the adoption of digital technologies in business processes and operations within Malaysia. However, while digitalisation offers numerous benefits, it has unfortunately also led to a significant surge in online criminal activity.

A recent report by The Star revealed that nearly RM600mil was lost to online fraud from January to March, according to the Bukit Aman Commercial Crime Investigation Department. The report highlighted that criminals are exploiting technological advancements and modern lifestyles, using fake digital identities, fraudulent websites, chatbots and deepfakes to deceive victims.

Recognising the escalating threat posed by these sophisticated methods, regulatory interventions focused on online content have become a prevailing trend. Malaysia, in line with this trend, passed the Online Safety Bill (OSB) 2024 in December. It is awaiting official gazettement, and its effective date will be determined by Communications Minister Datuk Fahmi Fadzil.

The new law aims to enhance online safety in Malaysia by regulating harmful content and establishing duties and obligations for application service providers (those who provide network services such as Internet access), content application service providers (those who provide content such as broadcasting and video streaming), and network service providers (those providing cellular mobile services and bandwidth services) operating within and outside Malaysia.

According to Minister in the Prime Minister's Department (Law and Institutional Reforms) Datuk Seri Azalina Othman Said, it applies to licensed application and content service providers such as Instagram, WhatsApp, TikTok and YouTube.

The Bill regulates two categories of content: 'harmful content' and 'priority harmful content'. Its First Schedule defines the scope of harmful content, which includes content on financial fraud, excluding content that promotes awareness or education related to financial fraud. The content on financial fraud listed in the First Schedule is also classified as 'priority harmful content' in the Second Schedule.

Key duties imposed on service providers include the obligation to implement measures to detect and mitigate harmful content (Section 13), issue user guidelines (Section 14), establish mechanisms for handling user reports of potentially harmful content (Sections 16 and 17), and prepare an Online Safety Plan (Section 20). For priority harmful content, such as financial fraud, service providers must take steps to prevent user access (Section 19).

Despite Fahmi's assurance that the government's intention in introducing the OSB is to serve as a measure to combat crime, such as financial fraud, and not to restrict freedom of expression, critics remain concerned about its potential to restrict freedom of expression and be used as a censorship tool by the government. These are valid concerns, particularly considering the vague definition of harmful content in the OSB and the vast powers conferred on a commission reporting to the Communications Ministry.

This commentary limits its discussion to the Bill's role concerning content on financial fraud. It is undeniable that unregulated content involving financial fraud, such as online scams, can damage business reputations and cause financial losses.

Deputy Communications Minister Teo Nie Ching reported that RM1.224bil was lost to online crimes and scams in Malaysia within the first 10 months of last year, with many victims falling prey to sponsored advertisements on social media platforms. Indeed, a duty should be imposed on service providers to prevent these platforms from profiting from sponsored advertisements involving financial scams, for which they receive payment to promote products or services.

Nevertheless, the provisions in the OSB are still lacking when viewed from the perspective of curbing online fraud.

In China, the government passed the Anti-Telecom and Online Fraud Law ('ATOF Law') in September 2022. This law is more specific, aiming to prevent, deter and punish telecommunications and online fraud, strengthen efforts against such fraud, and protect the rights and interests of citizens and organisations. It imposes responsibilities on key businesses in the telecommunications, financial, and Internet sectors to prevent fraud risks.

Among these requirements, the law requires service providers to verify users' identities before providing a range of services, such as web hosting, content and software distribution, livestreaming and advertising (Article 21). The law also imposes a duty of reasonable care on service providers to monitor, identify, and address the use of their services to commit fraud (Article 25).

The ATOF Law provides a model regulation for targeted fraud prevention. While Malaysia and China operate within different legal and cultural frameworks, there are valuable lessons to be learned from China's experience in addressing online fraud. China's preventive measures have contributed to a more secure digital environment for businesses and consumers, minimising the impact of online fraud.
While the Malaysian government's efforts to ensure a secure online environment and build trust in the digital ecosystem are laudable, more targeted provisions focused on the specific mechanics of online fraud, like those in China's ATOF Law, would be more efficient than a broad content regulation approach. The ATOF Law's emphasis on user verification and platform monitoring for fraudulent activities allows for a more focused form of content regulation.

It is imperative that the Malaysian government review the adequacy of the OSB in achieving its intended purpose of curbing online crime. To enhance its efficacy, the government could consider incorporating more granular provisions that mandate specific actions from service providers and establish clear and enforceable guidelines and penalties.

A well-regulated online environment, one that builds trust and security, can attract foreign investment and facilitate DSR-related collaborations, shaping the long-term trajectory of Malaysia's digital economy.

Lai Chooi Ling is a lecturer at Tunku Abdul Rahman University of Management and Technology (TAR UMT). The views expressed here are entirely the writer's own. The SEARCH Scholar Series is a social responsibility programme jointly organised by the South-East Asia Research Centre for Humanities (SEARCH) and TAR UMT.
