CNA938 Rewind - Homegrown large language model could be the next DeepSeek or ChatGPT

CNA | 05-05-2025

CNA938 Rewind
Local large language model Sea-Lion has gained much attention in South-east Asia, with around 235,000 downloads so far. AI Singapore researchers are now planning to add voice recognition to the programme later this year. Lance Alexander and Daniel Martin find out more from Darius Liu, Head of Partnerships & Strategy, AI Singapore.


Related Articles

Safety measures can spur AI's growth, not stifle it: Panellists

Straits Times | 28-05-2025

Professor Dawn Song (right) from the University of California, Berkeley at a fireside chat with Professor Simon Chesterman, senior director of AI governance at AI Singapore, at the Asia Tech x Singapore conference on May 28. PHOTO: IMDA

SINGAPORE - Safety measures can foster artificial intelligence (AI) innovations by avoiding unintended harm and building public trust, said Professor Dawn Song from the University of California, Berkeley.

Likening AI safety measures to seat belts in the early days of driving, she said that these guardrails will not stifle AI innovation, just as seat belts did not slow the development of the automotive industry. Rather, seat belts fostered more confident driving, contributing to advancements in overall automotive safety and development, she said.

Prof Song was speaking at a fireside chat about securing AI's future with science-based safety on May 28 at the Asia Tech x Singapore conference held at Capella Singapore. 'AI safety... can help you to innovate faster... (and also) in a safer way... It is not there to slow things down,' said Prof Song.

AI safety was a common theme at the conference, where many speakers acknowledged that AI's harms have to be minimised, either through regulation or some form of global consensus. Past cases of AI bias that caused massive embarrassment and harmed minority groups offered some lessons.

A 2016 investigation led by US publication ProPublica found that a criminal justice algorithm used by the US courts wrongly flagged black defendants as high risk for reoffending at nearly twice the rate of white defendants. In 2018, e-commerce giant Amazon scrapped an internal hiring tool after discovering it was biased against women. The system, trained on resumes submitted over a decade - many of which came from men - systematically downgraded applications that included the word 'women's', such as 'women's chess club'. The tool was never deployed beyond testing.

Calling for responsible innovation, Prof Song said: 'We want to innovate. We want to actually make the world a better place. But we can only do that if we can ensure the safety and security of the AI systems we are building and deploying.'

Dr Samir Menon, CEO of Dexterity AI, which builds dexterous robot solutions, pointed out that the safety risks with physical AI can be significantly higher than with AI chatbots. Physical AI refers to hardware that interacts with the real world through sensors and actuators, and includes autonomous vehicles and surgical robots. While software can cause online harms, hardware failures can be disastrous, he said, speaking at a panel titled I, Robot - Future of Embodied AI at the same event on May 28. 'Once a robot moves in the real world, it can bump into people, knock things over or worse,' he said.

In 2023, a robot taxi operated by US self-driving company Cruise ran over a pedestrian who had already been hit by another vehicle, dragging her for several metres and causing serious injuries. A year later in South Korea, a delivery robot made by robot delivery service Neubility collided with a moving car at a pedestrian crossing, damaging the vehicle and sparking debate over who should be held accountable when such machines go rogue.

Speaking at the same panel, Stanford University assistant professor of computer science Jiajun Wu noted that there are inconsistencies in the performance of robots across different research settings.
Most robots are also trained in controlled environments, and applying them to real-world settings can be challenging, Dr Samir said. 'If you deploy 10 robots and need to cordon off five metres around each of them, your whole building ends up looking like a robot zoo. That's just not viable,' he added.

True progress, he said, lies in shared spaces. To co-exist safely with humans, robots must, for instance, be taught how to handle tools safely or carry drinks without spilling, and their training must be done in real-world settings. 'If we can pull that off in the next four to five years, that's going to be a fantastic step up,' said Dr Samir.

To scale AI deployments safely, clear standards are also needed. 'If I train an AI on a robot with one type of hand, and then change the hand or switch to two arms, will it still work? Right now, we just don't know,' he said.

Prof Song echoed this, noting that AI systems often lack transparency and systematic evaluation. She added that the broader AI policy landscape remains fragmented, with limited consensus on best practices, and that most AI firms still prioritise performance over safety, investing relatively little in risk mitigation.

Prof Song said the AI industry can draw lessons from the cybersecurity sector, which has shifted away from treating security as an afterthought: many systems now have security measures built in from the start. Similarly, AI can be designed to be safe from the outset, she said.

S'pore doubles down on AI with 800 new training spaces, 500 business projects

Straits Times | 27-05-2025

Senior Minister of State for Digital Development and Information Tan Kiat How said the digital economy contributes 18 per cent of the local economy.

SINGAPORE – About 800 new training opportunities and as many as 500 new projects benefiting 1,000 enterprises will be rolled out as Singapore doubles down on artificial intelligence (AI), a cornerstone of its digital economy strategy.

Both mid-career AI novices and seasoned practitioners can have a stab at 400 training places at national programme AI Singapore (AISG) over the next three years, and another 400 places made available by companies ranging from AWS and Oracle to Microsoft and Singtel. The new places will add to Singapore's current pool of over 6,000 AI professionals, said the Infocomm Media Development Authority (IMDA) at the start of its three-day ATX Enterprise 2025 conference, held under its annual cluster of Asia Tech x Singapore events.

Making the opening address on May 27 to about 200 delegates at Singapore Expo, Senior Minister of State for Digital Development and Information Tan Kiat How said the digital economy contributes 18 per cent of the local economy, and the country continues to draw investments. To date, at least 26 AI Centres of Excellence have been set up by organisations to drive AI innovation activities. These centres are often hubs for experimentation, training and sandboxing.

Mr Tan said: 'Despite the global uncertainty, we expect technology, especially AI, to continue to drive quality economic growth. As companies expand their AI teams and deepen their capabilities in Singapore, they will require more practitioners with AI expertise.'

About 300 of the new 800 training places will be offered over the next two years through an enhanced AI Apprenticeship Programme (AIAP) under AISG. Called AIAP Industry, the six-month curriculum will focus on practical industry needs. Since the programme started in 2018 to groom local AI talent, more than 410 graduates across 16 cohorts have been trained, and over 90 per cent of its trainees were hired after graduation, AISG told The Straits Times.

One of its apprentices, Mr Jerald Han, 33, left his job as a deputy director at HDB to join the AI Apprenticeship Programme in 2024. He is now a natural language processing engineer at local AI unicorn Patsnap. He said: 'I had an engineering background but very little software engineering knowledge... AIAP definitely gave me the boost I needed as it covered many industry-relevant and practical skills, such as the importance of writing readable code and machine learning operations concepts.'

AISG will also start the Pinnacle AI Industry Programme (PAIP), which will train 100 local AI practitioners into 'expert model builders' over the next three years. Companies may nominate their AI-functional employees for the six-month programme. These AI professionals will get hands-on experience in various stages of the large language model development life cycle, including data management, model training and development, and will work on AISG's regional-focused model, Sea-Lion.

Singapore is on track to triple its pool of tech talent over five years to 15,000 by 2028, boosted by a 25 per cent jump in the past year through various initiatives with schools and skills upgrading programmes, said IMDA. More than 20,000 locals have been helped into tech jobs and 320,000 individuals have picked up tech skills, the agency added.
Enterprises, however, are hoping for a quicker pace, according to an April survey of 350 local companies commissioned by global payroll firm Deel. It shows that only 12 per cent of small and medium-sized enterprises are at an intermediate stage of AI adoption, with a shortage of AI talent cited as a key reason. About 47 per cent of respondents said the local AI talent pool is insufficient to meet business needs. Among the recruitment hurdles, 51 per cent noted high salary expectations, and 47 per cent cited a skills mismatch. To plug their immediate needs, 62 per cent of the firms were open to hiring talent from abroad.

Referring to the findings, Mr Nick Catino, Deel's global head of policy, said: 'Talent remains the single biggest barrier to scaling AI. Cross-border hiring and remote work offer Singapore businesses access to global expertise, but this expertise must be harnessed to empower and elevate local teams.'

For enterprises, Mr Tan announced that another 1,000 firms will get support such as funding and guidance for up to 500 new AI projects in the next 12 months under an expanded Gen AI for Digital Leaders initiative. The programme, which helps firms adopt AI, has been used by more than 200 enterprises for 50 projects as at April.

One of these firms, White Restaurant, integrated its human resource management system to combine employees' attendance, pay slips, leave and claim applications into a single app. It also introduced an AI bot using IMDA's Chief-Technology-Officer-as-a-Service (CTO-as-a-Service) programme. White's director, Ms Laureen Tan, said: 'The grant application was easy to understand... The vendor also provided hands-on training and support, and within a short period of time, our team members adapted to the system.'

Mr Tan noted that another three companies – Alibaba Cloud, ST Engineering and Prudential Singapore – have made commitments to help local firms. Alibaba Cloud pledged to help up to 3,000 SMEs and digital solution providers with cloud technologies and AI, while ST Engineering will provide free cyber threat scanning services for up to 2,000 SMEs. Prudential Singapore committed to providing up to 10 gen AI tech explainer videos as part of its pledge. These firms join eight partners – AWS, DBS, Google, Microsoft, Salesforce, SGTech, Singapore Business Federation and Singapore Computer Society – whose pledges of support in 2024 helped 10,000 enterprises, noted Mr Tan.

As part of Singapore's efforts to boost South-east Asia's AI cooperation, AISG is also launching the first pan-South-east Asia AI developer challenge to build AI solutions customised for the region. Singapore is committed to driving impactful digital innovation alongside like-minded stakeholders, said Mr Tan. Closing his address, he said: 'Singapore's value lies not just in our capabilities, but in our consistency – in being a partner you can count on, even when the world is less certain.'

GenAI in resume writing, job assessments: Fair use or foul play?

Singapore Law Watch | 26-05-2025

Source: Straits Times | Article Date: 26 May 2025 | Author: Megan Wee

Not all employers are comfortable with candidates' use of AI tools at this point.

Like many of his peers, final-year economics and data science major Jonathan Chan (not his real name) had been applying for jobs ahead of graduation in July. One of the roles the 25-year-old applied for was with a local bank. As the position required computing skills, he had to complete a timed coding assessment as part of the application.

What caught him off guard when the test began was a pop-up that flashed across the screen, notifying him that the use of generative artificial intelligence (GenAI) tools was barred and that his eye movement would be tracked to ensure adherence. Fortunately, the restriction did not stop him from completing the test and landing the role, although he was used to coding with the help of ChatGPT.

But his experience was rather uncommon. Several other young and mid-career applicants who spoke to The Straits Times said they had not been told to refrain from using GenAI tools.

Mr Benjamin Lee (not his real name), 23, completed his writing assessment for an internship role with the help of ChatGPT. Instructed to plan a mock-up campaign, the second-year communication studies major turned to the AI chatbot for idea generation, and he felt the practice was acceptable 'as long as you don't use (the generated response) wholesale'. He said ChatGPT made tailoring his resume to different job descriptions more convenient. He also turned to it for answers on what to wear for an interview and whether to initiate a handshake with the interviewer, among other things.

Another graduate, Ms Lim Zi Yi, aced her writing test for a content analyst role at a financial information company with the help of GenAI. When she received her assessment, which was entirely in traditional Chinese characters, she promptly input the text into ChatGPT for translation into English before working on analysing it. When Ms Lim was praised for her performance during the face-to-face interview that followed the test, she candidly told the interviewer that she had used GenAI tools. To her surprise, the interviewer commended her ability to work with AI to create a piece of work that retained the human touch, and she was offered the job.

AI-friendly hiring

Ms Lim's employer is not alone in welcoming candidates' use of GenAI tools during the job application process. AI Singapore, a research institute hosted by the National University of Singapore, allows applicants to its AI Apprenticeship Programme (AIAP) to leverage AI-powered tools during the technical assessment, before they move on to the technical interview, and again during the group case study exercise.

'AI is rapidly becoming an integral component of standard workflows across various roles, including that of an AI engineer,' a spokesperson for AI Singapore said. However, the institute has safeguards in place to ensure fair assessment of candidates' abilities. For example, its interviewers will pose questions to evaluate their understanding of the work and observe how they use AI tools to identify potential misuse.

Mr Josh Lim, principal consultant at Robert Walters Singapore, said his recruitment agency is open to candidates' use of GenAI to help them refine resumes, prepare for interviews and practise for assessments, as long as their applications reflect their genuine experiences, capabilities and communication styles.
In terms of assessment format, he said some of his clients prefer candidates to sit the test in a controlled and monitored environment, while others choose to trust the candidates' integrity.

Representing The Talent Detective, a local boutique recruitment firm, Ms Sim Yunying said her firm does not actively police candidates' use of AI. 'It's a bit hard to tell people you can use it in this case, but you can't use it there,' she said. 'What is most important is the integrity of the candidate. You can have AI to help you finesse your curriculum vitae, but everything inside should still be accurate and true.'

Ms Sim believes that candidates' proficiency at using AI during the application process can reflect how well they can use it on the job. For example, when it comes to marketing campaigns and proposals, 'the human judgment call is still very important' in determining whether an AI-generated idea aligns with the brand and whether it has the potential to truly stand out, she said. Hence, if candidates are able to present a very strong concept – even with the help of GenAI – they can still take credit for being able to discern what is truly marketable, she added.

Caution around AI use

However, not all employers are comfortable with candidates' use of AI tools. Mr Dean Tong, UOB's head of group human resources, said the bank's recruitment process is designed to assess a candidate's critical thinking, communication and problem-solving skills. 'To ensure a fair and accurate evaluation, we do not permit the use of AI tools during job application assessments or interviews,' he said. But once hired, employees will receive training on how to use GenAI tools responsibly and effectively, he said. Recognising that AI is still evolving, Mr Tong said the bank does not rule out the possibility of having to adapt its policy in the future. 'We are constantly reviewing and updating our approach, and may in future evaluate candidates on how they use AI to solve problems,' he said.

Recruitment agency Randstad also believes in ensuring a fair evaluation based on a candidate's genuine abilities – without AI intervention – so its recruiters can match the right talent with the right job. 'While Randstad champions AI for boosting productivity and we see its growing importance in our daily work, the integrity of the interview and assessment process is crucial,' said Mr David Blasco, its country director in Singapore. 'There is a big difference between using AI as a minor aid, like a spell-checker or highlighting real achievements that are already in the CV, versus using it to create the whole CV based on the job description,' he said.

As for assessments, he said candidates are expected to demonstrate their genuine skills, whether they are fresh graduates or experienced professionals. A trained recruiter will be able to spot telltale signs of AI use if candidates are not able to provide enough details when answering follow-up questions.

Regardless of companies' stances, some job applicants remain apprehensive about being transparent over their use of AI. Business major Nathan Foo (not his real name) has been relying heavily on AI to complete both his school assignments and work tasks. 'I can never imagine going back to the days when there was no ChatGPT,' the 24-year-old said, adding that he would feel disadvantaged if he were barred from using the tool when many others use it.
Yet, despite acknowledging the growing acceptance of AI use, he – like several other interviewees for this article – requested anonymity, fearing that his dependence on AI might be frowned upon. 'There is an unspoken rule that you shouldn't rely too much on AI,' he said. 'Maybe because ChatGPT was launched only a few years ago, so there is still a bit of a taboo.'

Source: The Straits Times © SPH Media Limited. Permission required for reproduction.
