
Latest news with #InfocommMediaDevelopmentAuthority

Empowering Smart Enterprises: HashMicro's Role in Singapore's Digital Future

Straits Times · 6 days ago · Business

SINGAPORE, July 16, 2025 /PRNewswire/ -- Singapore's digital economy is on track to contribute over 17% to GDP by 2025, with estimates placing its value at more than $30 billion, according to the Infocomm Media Development Authority (IMDA). Supported by initiatives like Smart Nation, the Digital Enterprise Blueprint, and CTO-as-a-Service, the country's digital push is accelerating across industries, driven by infrastructure readiness, policy clarity, and the business community's appetite for smarter, more connected systems.

[Photo: HashMicro's real-time dashboard for increased company performance]

Yet even in one of the world's most digitally advanced nations, transformation isn't always straightforward. Many established companies, especially larger enterprises with legacy systems, grapple with change management, integration complexity, and operational continuity. As Singapore transitions toward a future led by intelligent, data-driven enterprises, success hinges on more than adopting new technology; it requires a well-rounded ecosystem where access, talent, regulation, and trust work in sync.

This is the role HashMicro plays. As a homegrown enterprise software provider with strong roots in Southeast Asia, HashMicro is not only delivering powerful ERP solutions but also helping shape the digital foundation on which smart enterprises can scale, adapt, and lead. The company's growth in Singapore and beyond is grounded in its commitment to building this ecosystem regionally.

HashMicro's approach begins with access: ensuring technology is not just available, but usable and effective. Many large companies today still operate in silos, with separate systems for procurement, inventory, HR, and finance. HashMicro's ERP suite integrates these into one centralized platform, offering real-time visibility, custom automation, and modular flexibility. "We're not here to rip and replace," explains Lusiana Lu, Chief of Business Development at HashMicro. "We enhance existing processes, modernize workflows, and make it intuitive for teams to adapt."

Beyond access, talent enablement is key. HashMicro believes that software doesn't drive transformation; people do. That's why the company invests heavily in internal training, client onboarding, and user education. Its consultants are trained to align technology with business outcomes, ensuring every deployment delivers practical impact. "Our goal isn't just to implement systems; it's to build digital confidence within organizations," Lusiana shares.

The third piece is regulatory readiness. With Singapore's well-defined but evolving compliance landscape, enterprises need systems that are both agile and localized. From IRAS GST reporting to MOM payroll standards, HashMicro ensures that its platforms are always up to date with regulatory requirements. This level of localization not only reduces risk but also gives companies the clarity to expand confidently across borders.

But above all, what sets HashMicro apart is its ability to build trust. In an era of tech fatigue and overpromised innovation, trust becomes the true catalyst for change. For HashMicro, that trust is built through reliability, partnership, and long-term commitment. "We've worked with enterprises across logistics, retail, construction, and manufacturing, many of them with complex setups and high compliance needs," Lusiana explains. "They don't just need software; they need a partner who understands their business inside out."
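To picture what integrating silos into "one centralized platform" means in practice, here is a deliberately simple sketch of the underlying pattern: separate business modules publish events into a shared store, and a dashboard derives a live, cross-module view from it. The sketch is generic and invented for illustration; it does not describe HashMicro's actual product, architecture, or code.

    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative pattern only: modules write to one shared store instead of
    # keeping separate, disconnected systems; a dashboard reads a live rollup.

    @dataclass
    class Event:
        module: str      # e.g. "procurement", "inventory", "finance"
        kind: str        # e.g. "po_created", "stock_moved", "invoice_paid"
        amount: float
        at: datetime

    class CentralStore:
        """Single source of truth that every module publishes into."""
        def __init__(self):
            self.events: list[Event] = []

        def publish(self, event: Event) -> None:
            self.events.append(event)

        def totals_by_module(self) -> dict:
            """Real-time rollup a dashboard widget might render."""
            totals = defaultdict(float)
            for e in self.events:
                totals[e.module] += e.amount
            return dict(totals)

    store = CentralStore()
    store.publish(Event("procurement", "po_created", 12_500.0, datetime.now()))
    store.publish(Event("inventory", "stock_moved", -3_200.0, datetime.now()))
    store.publish(Event("finance", "invoice_paid", 9_800.0, datetime.now()))
    print(store.totals_by_module())

The point of the pattern is that "real-time visibility" falls out of having one write path: once every module publishes to the same store, a dashboard is just a query, not an integration project.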
HashMicro's success in Singapore is part of a broader mission to support smart enterprises across Asia. As it deepens its presence in markets like Indonesia, the Philippines, and beyond, its model remains consistent: build systems that make sense, invest in people, stay aligned with local needs, and deliver results clients can trust. By empowering businesses with seamless technology, guided onboarding, and localized intelligence, HashMicro is enabling enterprises not just to adopt digital tools but to thrive as leaders in the digital economy. In doing so, it's helping define what a smart enterprise truly looks like: agile, integrated, and future-ready.

Unlock more data to train AI responsibly through privacy tech: Josephine Teo

Straits Times · 07-07-2025 · Business

[Photo: Minister for Digital Development and Information Josephine Teo speaking at the Personal Data Protection Week on July 7.]

SINGAPORE - The lack of good, accurate data is limiting the continuing advancement of artificial intelligence (AI), a challenge Singapore hopes to tackle by guiding businesses on ways to unlock more data. It is believed that, through the use of privacy-enhancing technologies (PETs), AI developers can tap private databases without risking data leakages.

In announcing a draft PET adoption guide on July 7, Minister for Digital Development and Information Mrs Josephine Teo said: 'We believe there is much that businesses and people can gain when AI is developed responsibly and deployed reliably, including the methods for unlocking data.' She was speaking on the first day of the Personal Data Protection Week 2025, held at Sands Expo and Convention Centre.

Urging data protection officers and leaders in the corporate and government sectors to understand and put in place the right measures, she said: 'By doing so, not only will we facilitate AI adoption, but we will also inspire greater confidence in data and AI governance.'

Mrs Teo acknowledged the challenges in AI model training, as Internet data is uneven in quality and often contains biased or toxic content, which can lead to issues down the line with model outputs. Problematic AI models surfaced during the first regional red teaming challenge organised by the Infocomm Media Development Authority (IMDA) and eight other countries, she said. 'When asked to write a script about Singaporean inmates, the large-language model chose names such as Kok Wei for a character jailed for illegal gambling, Siva for a disorderly drunk, and Razif for a drug abuse offender,' said Mrs Teo. 'These stereotypes, most likely picked up from the training data, are actually things we want to avoid.'

In the face of the data shortage, developers have turned to sensitive and private databases to improve their AI models, said Mrs Teo. She cited OpenAI's partnerships with companies and governments such as Apple, Sanofi, Arizona State University and the Icelandic government. While this is a way to increase data availability, it is time-consuming and difficult to scale, she added.

AI apps, which can be seen as the 'skin' layered on top of an AI model, can also pose reliability concerns, she said. Typically, companies employ a range of well-known guardrails - including system prompts to steer the model's behaviour or filters to sieve out sensitive information - to make their apps reliable, she added. Even then, apps can have unexpected shortcomings, she said. For instance, a high-tech manufacturer's chatbot ended up spilling backend sales commission rates when third-party tester Vulcan gave prompts in Chinese, Mrs Teo said. 'To ensure reliability of GenAI apps before release, it's important to have a systematic and consistent way to check that the app is functioning as intended, and that there is some baseline safety,' she said.

Mrs Teo also acknowledged that there are no easy answers as to who is accountable for AI shortcomings, referencing the 2023 case of Samsung employees unintentionally leaking sensitive information by pasting confidential source code into ChatGPT to check for errors. She asked: 'Is it the responsibility of employees who should not have put sensitive information into the chatbot? Is it also the responsibility of the app provider to ensure that they have sufficient guardrails to prevent sensitive data from being collected? Or should model developers be responsible for ensuring such data is not used for further training?'

PETs are not new to the business community in Singapore. Over the past three years, a PET Sandbox run by IMDA and the Personal Data Protection Commission has produced tangible returns for some businesses. The sandbox is a secure testing ground for companies to test technology that allows them to use or share business data easily, while masking sensitive information such as customers' personal details.

'For instance, Ant International used a combination of different PETs to train an AI model with their digital wallet partner without disclosing customer information to each other,' said Mrs Teo. The aim was to use the model to match vouchers offered by the wallet partner with the customers most likely to use them. The financial institution provided voucher redemption data of their customers, while the digital wallet company contributed purchase history, preference and demographic data of the same customers, said Mrs Teo. The AI model was trained separately on both datasets, and neither data owner was able to see or ingest the other's dataset. 'This led to a vast improvement in the number of vouchers claimed,' said Mrs Teo. 'The wallet partner increased its revenues, while Ant International enhanced customer engagement.'
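The Ant International example follows a pattern worth making concrete. The article does not name the exact techniques that were combined, so the sketch below uses plain federated averaging as one representative PET: each party computes model updates on its own private records, and only those updates, never raw rows, reach the coordinator. All data, party roles and numbers here are synthetic and purely illustrative.

    import numpy as np

    # Minimal federated-averaging sketch (illustrative only): two parties
    # train a shared logistic-regression model on their own private records
    # and exchange only gradient updates, never raw rows.

    rng = np.random.default_rng(0)

    def make_party_data(n, shift):
        """Synthetic stand-in for one party's private dataset."""
        X = rng.normal(shift, 1.0, size=(n, 4))
        y = (X.sum(axis=1) + rng.normal(0, 1, n) > 4 * shift).astype(float)
        return X, y

    def local_gradient(w, X, y):
        """Logistic-loss gradient computed inside the party's own environment."""
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (p - y) / len(y)

    # Party A: voucher redemption history; Party B: purchase/preference data.
    parties = [make_party_data(500, 0.0), make_party_data(500, 0.5)]

    w = np.zeros(4)  # shared model parameters held by the coordinator
    for step in range(200):
        # Each party computes its update locally; only these small vectors
        # leave the silo, not the underlying records.
        grads = [local_gradient(w, X, y) for X, y in parties]
        w -= 0.5 * np.mean(grads, axis=0)  # coordinator averages the updates

    print("trained weights:", np.round(w, 3))

Gradients can still leak information about the underlying records, so real deployments typically add further protection, such as secure aggregation or differentially private noise, on top of this basic exchange.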

Data Protection Trustmark elevated to Singapore Standard

Business Times · 07-07-2025 · Business

[SINGAPORE] With a more developed data protection ecosystem now established, the next step is to put more formal standards in place, said Minister for Digital Development and Information Josephine Teo.

She was speaking at the opening of the Personal Data Protection Week, held at the Sands Expo and Convention Centre on Monday (Jul 7), where she announced the elevation of the Data Protection Trustmark (DPTM) to Singapore Standard 714. Prior to this, the trustmark did not have a Singapore Standard. The Infocomm Media Development Authority (IMDA) worked with Enterprise Singapore and the Singapore Accreditation Council to elevate the DPTM to the Singapore Standard, Teo added.

The DPTM Singapore Standard provides organisations with clearer data protection requirements around critical areas such as third-party management and overseas transfers, said IMDA in a press statement. It helps certified organisations demonstrate their commitment to effective data protection, IMDA added.

'Companies that demonstrate accountable data protection practices can now apply to be certified under this new standard, which will set the national benchmark for companies that want to demonstrate data protection excellence,' said Teo.

The theme for the Personal Data Protection Week this year is 'Data Protection in a Changing World', which acknowledges the significant changes in both our global operating environment and the world of technology, said the minister. These 'twin forces' have disrupted workplaces, homes and relationships with each other, she added. 'It is inevitable that we must adjust our practices, laws and even our broader social norms.'

Teo added that the importance of data in the age of artificial intelligence (AI) is as pertinent as ever, noting that generative AI models are built on vast amounts of data throughout their development life cycle. 'Given the criticality of data in the AI age, it should not be surprising that data has also become a limiting factor to continuing advancement,' she said.

Encountered a problematic response from an AI model? More standards and tests are needed, say researchers

CNBC · 22-06-2025 · Science

As the usage of artificial intelligence — benign and adversarial — increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringements and sexual content. The emergence of these undesirable behaviours is compounded by a lack of regulations and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they were intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify any potential harm — a modus operandi common in cybersecurity circles. Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently insufficient people working in red teams.

While AI startups are now using first-party evaluators or contracted second parties to test their models, opening the testing to third parties such as normal users, journalists, researchers and ethical hackers would lead to a more robust evaluation, according to a paper published by Longpre and fellow researchers. "Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said.

Adopting standardized 'AI flaw' reports, along with incentives and channels to disseminate information on these flaws in AI systems, is among the recommendations put forth in the paper. With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure that their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should be done both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed. "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and enable multilingual and multicultural red teaming.

Pierre Alquier, Professor of Statistics at the ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation. "When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process is in place in the aviation sector. AI models need to meet a strict set of conditions before they are approved, Alquier added.

A shift away from broad AI tools toward ones designed for more specific tasks would make it easier to anticipate and control their misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them." Such broad models make defining what counts as safe and secure difficult, according to research that Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.
