
AI puts 600,000 jobs at risk but opens new roles, says Malaysia's HR minister
Speaking at the ARTDO International Conference, Human Resources Minister Steven Sim said AI could unlock thousands of new employment opportunities, with more than 60 emerging job roles already identified, 70% of them in the AI and technology sector.
A recent ministry-commissioned study revealed that 600,000 existing jobs are 'at risk' due to AI, though not necessarily lost. 'Some may become obsolete, but most will be reshaped, demanding urgent reskilling and upskilling,' Sim said.
He urged a shift from 'worry to strategy,' stressing that Malaysia must equip its workforce with AI-ready skills. New job roles such as prompt engineers are emerging, requiring not just technical expertise but also oversight of AI-generated outputs. Sim emphasised two key skill pillars: high-level AI proficiency for managing or developing AI systems, and broad AI literacy for everyday users.
To support this, the MyMahir portal is helping Malaysians align their training with future-ready skills. Sim also highlighted the need for clear ethical and legal frameworks to guide AI's development responsibly.
'This is not just about technology, it's about values, regulation, and inclusive growth,' he concluded, reinforcing the ministry's commitment to balancing innovation with workforce readiness.

Related Articles


Hindustan Times
ChatGPT generates drug-use plan, writes suicide note for…, warning issued
A new report by the Center for Countering Digital Hate (CCDH) has raised serious concerns about how ChatGPT responds to prompts from users posing as vulnerable teenagers. According to the Associated Press, researchers found that the AI chatbot generated harmful responses, including suicide notes, drug-use plans, and self-harm advice, when interacting with fictional 13-year-olds.

Alarming findings
The report, based on over three hours of interactions between ChatGPT and researchers simulating distressed teens, claims that the AI responded with 'dangerous and personalised content' to over half of 1,200 tested prompts. One of the most disturbing examples involved the chatbot generating three detailed suicide notes for a fictional 13-year-old girl, one each for her parents, friends, and siblings. CCDH CEO Imran Ahmed, who reviewed the output, said: 'I started crying.' He added that the AI's tone mimicked empathy, making it appear like a 'trusted companion' rather than a tool with guardrails.

Harmful content generated
Some of the most concerning responses included:
• Detailed suicide letters
• Hour-by-hour drug party planning
• Extreme fasting and eating disorder advice
• Self-harm poetry and depressive writing
Researchers noted that safety filters were easily bypassed simply by rephrasing a prompt, such as saying the information was 'for a friend.' The chatbot does not verify a user's age, nor does it request parental consent.

Why this matters
Unlike search engines, ChatGPT synthesises responses, often presenting complex, dangerous ideas in a clear, conversational tone. CCDH warns that this increases the risk for teens, who may interpret the chatbot's replies as genuine advice or support. Ahmed said, 'AI is more insidious than search engines because it can generate personalised content that seems emotionally responsive.'

OpenAI responds
While OpenAI has not commented specifically on the CCDH report, a spokesperson told the Associated Press that the company is 'actively working to improve detection of emotional distress and refine its safety systems.' OpenAI acknowledged the challenge of managing sensitive interactions and said enhancing safety remains a top priority.

The bottom line
The report underlines a pressing issue as AI tools become more accessible to children and teens. Without robust safeguards and age verification, platforms like ChatGPT may inadvertently put vulnerable users at risk, prompting urgent calls for improved safety mechanisms and regulatory oversight.


The Hindu
Nearly 60% of Indian organisations lack AI governance policy: Report
Nearly 60% of Indian organisations either lack an artificial intelligence (AI) governance policy or are still in the process of developing one, per a new IBM report. The report, titled 'Cost of a Data Breach Report 2025', highlights a worrying gap between rapid AI adoption and lagging security controls, raising fresh concerns about the strategic readiness of Indian enterprises to handle AI-related cyber threats.

IBM's report also revealed that the average cost of a data breach in India has reached an all-time high of ₹220 million in 2025, marking a 13% increase over last year's figure of ₹195 million. The surge reflects rising cyber risks across industries, particularly as organisations integrate AI tools without corresponding investments in governance and access control. Globally, IBM noted that while AI adoption is booming, it is increasingly outpacing security protocols, making ungoverned AI systems attractive targets for threat actors.

In India, only 37% of surveyed organisations have implemented AI access controls, and just 42% reported having policies to manage or detect 'shadow AI', the unauthorised use of AI tools and applications. Notably, shadow AI emerged as one of the top three cost drivers of breaches, adding an average of ₹17.9 million to the total impact. Despite this, most organisations have yet to adopt dedicated safeguards to monitor or contain these hidden vulnerabilities.

Phishing continues to be the most common cause of data breaches in India, accounting for 18% of incidents, followed closely by third-party and supply chain compromises (17%) and vulnerability exploitation (13%). Among industries, the research sector bore the highest average breach cost at ₹289 million, followed by transportation (₹288 million) and industrial organisations (₹264 million). Despite evidence that AI-driven security tools can more than halve breach costs, 73% of Indian organisations surveyed reported limited or no use of AI-based security automation.


India Today
Meta contractors review private AI chats, sometimes seeing user names and photos: Report
Some conversations you've had with Meta's AI may not have been as private as you thought. According to a report by Business Insider, contract workers hired to train Meta's AI systems have reviewed thousands of real user chats, and in many cases, those conversations included names, email addresses, phone numbers, selfies, and even explicit images. Four contractors told the publication that they were regularly exposed to personal information while working on Meta AI projects. These individuals were reportedly hired through platforms called Outlier (owned by Scale AI) and Alignerr. The projects they worked on aimed to improve the quality and personalisation of Meta's AI responses, a process that involves reviewing real interactions between users and AI-powered said they often came across highly personal conversations, ranging from therapy-like sessions and rants about life, to flirty or romantic exchanges. One worker claimed that up to 70 per cent of the chats they reviewed included some form of personally identifiable information. Some users reportedly sent selfies or explicit images to the chatbot, believing the conversation to be seen by Business Insider reportedly showed that in some cases, Meta itself provided background user data, like names, locations, or hobbies, to help the AI personalise responses. In other cases, users voluntarily gave up this information during conversations, despite Meta's privacy policy warning users not to share personal details with the chatbot. One particularly concerning example described in the report involved a sexually explicit conversation with enough personal information for the reporter to locate a matching Facebook profile within which owns platforms like Facebook and Instagram, acknowledged that it does review user interactions with its AI. A spokesperson told Business Insider that it has "strict policies" governing who can access personal data and that contractors are instructed on how to handle any information they may come across. 'While we work with contractors to help improve training data quality, we intentionally limit what personal information they see,' the spokesperson reportedly contractors said projects run by Meta exposed more unredacted personal data than those of other tech companies. One project called Omni, run by Alignerr, aimed to boost engagement on Meta's AI Studio. Another project called PQPE, operated via Outlier, encouraged AI responses to reflect user interests pulled from past conversations or social isn't the first time Meta has come under scrutiny for its data practices. The company's history includes the 2018 Cambridge Analytica scandal and multiple reports over the years about contractors listening in on voice recordings without proper safeguards. While reviewing AI conversations with human help is common in the tech industry, Meta's track record has raised added concern. - EndsTrending Reel