AI oversight needed to ensure fairness, accountability, and inclusivity, says Lee Lam Thye


KUALA LUMPUR: The Alliance for a Safe Community has called for clear, forward-looking regulations and a comprehensive ethical framework to ensure artificial intelligence (AI) development prioritises fairness, accountability, and inclusivity.
"This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few," said chairman Tan Sri Lee Lam Thye in a statement today.
The group proposed a regulatory framework including AI accountability laws, transparency and explainability for AI decision-making that impacts individuals, strengthened data protection and privacy standards, risk assessment and certification requirements, and the creation of public oversight bodies.
The group also proposed the establishment of a Code of Ethics that is human-centric, non-discriminatory, fair, honest, environmentally responsible, collaborative, and inclusive.
He warned that while AI holds promise for healthcare innovations and environmental sustainability, its use must always serve the greater good.
Key risks include privacy breaches, algorithmic bias, job displacement, and the spread of misinformation, Lee added.
"We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity," he said.
The group concluded with a warning against a future in which technology dictates the terms of our humanity, and called for a path where AI amplifies humanity's best qualities for the benefit of all.
On Wednesday, Prime Minister Datuk Seri Anwar Ibrahim said the government plans to push for new legislation aimed at reinterpreting sovereignty in light of the rapid growth of AI and cloud-based technologies.
Anwar added that, given the evolving role of governance in the digital era, traditional notions of sovereignty, designed for a pre-digital world, must be reconsidered to accommodate new technological realities.


Related Articles

AI ‘vibe coding' startups burst onto scene with sky-high valuations

The Star

NEW YORK, NY (Reuters) - Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development. So-called code generation or "code-gen" startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

Cursor, a San Francisco-based code generation startup whose tool can suggest and complete lines of code and write whole sections of code autonomously, raised $900 million at a $10 billion valuation in May from a who's who of tech investors, including Thrive Capital, Andreessen Horowitz and Accel. Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters. Its tool is known for translating plain-English commands into code, sometimes called "vibe coding", which allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

"AI has automated all the repetitive, tedious work," said Scott Wu, CEO of code-gen startup Cognition. "The software engineer's role has already changed dramatically. It's not about memorizing esoteric syntax anymore."

Founders of code-gen startups and their investors believe they are in a land-grab situation, with a shrinking window to gain a critical mass of users and establish their AI coding tool as the industry standard. But because most are built on AI foundation models developed elsewhere, such as those from OpenAI, Anthropic or DeepSeek, their costs per query are also growing, and none are yet profitable. They are also at risk of being disrupted by Google, Microsoft and OpenAI, which all announced new code-gen products in May; Anthropic is working on one as well, two sources familiar with the matter told Reuters.
The rapid growth of these startups is coming despite competing on big tech's home turf. Microsoft's GitHub Copilot, launched in 2021 and considered code-gen's dominant player, grew to over $500 million in revenue last year, according to a source familiar with the matter. Microsoft declined to comment on GitHub Copilot's revenue. On Microsoft's earnings call in April, the company said the product has over 15 million users.

LEARN TO CODE?

As AI revolutionizes the industry, many jobs - particularly entry-level coding positions that are more basic and repetitive - may be eliminated. SignalFire, a VC firm that tracks tech hiring, found that new hires with less than a year of experience fell 24% in 2024, a drop it attributes to tasks once assigned to entry-level software engineers now being fulfilled in part by AI. Google's CEO also said in April that "well over 30%" of Google's code is now AI-generated, and Amazon CEO Andy Jassy said last year the company had saved "the equivalent of 4,500 developer-years" by using AI. Google and Amazon declined to comment.

In May, Microsoft CEO Satya Nadella said at a conference that approximately 20% to 30% of the company's code is now AI-generated. The same month, the company announced layoffs of 6,000 workers globally, with over 40% of those in Microsoft's home state of Washington being software developers. "We're focused on creating AI that empowers developers to be more productive, creative, and save time," a Microsoft spokesperson said. "This means some roles will change with the revolution of AI, but human intelligence remains at the center of the software development life cycle."

MOUNTING LOSSES

Some "vibe-coding" platforms already boast substantial annualized revenues. Cursor, with just 60 employees, went from zero to $100 million in recurring revenue by January 2025, less than two years after its launch.
Windsurf, founded in 2021, launched its code generation product in November 2024 and is already bringing in $50 million in annualized revenue, according to a source familiar with the company. But both startups operate with negative gross margins, meaning they spend more than they make, according to four investor sources familiar with their operations.

"The prices people are paying for coding assistants are going to get more expensive," Quinn Slack, CEO of coding startup Sourcegraph, told Reuters. To make the higher cost an easier pill for customers to swallow, Sourcegraph now offers a drop-down menu that lets users choose which models they want to work with, from open-source models such as DeepSeek to the most advanced reasoning models from Anthropic and OpenAI, so they can opt for cheaper models for basic questions.

Both Cursor and Windsurf are led by recent MIT graduates in their twenties, and exemplify the gold-rush era of the AI startup scene. "I haven't seen people working this hard since the first Internet boom," said Martin Casado, a general partner at Andreessen Horowitz, an investor in Anysphere, the company behind Cursor.

What's less clear is whether the dozen or so code-gen companies will be able to hang on to their customers as big tech moves in. "In many cases, it's less about who's got the best technology - it's about who is going to make the best use of that technology, and who's going to be able to sell their products better than others," said Scott Raney, managing director at Redpoint Ventures, whose firm invested in Sourcegraph and Poolside, a software development startup that's building its own AI foundation model.

CUSTOM AI MODELS

Most of the AI coding startups currently rely on the Claude AI model from Anthropic, which crossed $3 billion in annualized revenue in May, in part due to fees paid by code-gen companies. But some startups are attempting to build their own models.
In May, Windsurf announced its first in-house AI models, optimized for software engineering, in a bid to control the user experience. Cursor has also hired a team of researchers to pre-train its own large frontier-level models, which could spare the company much of what it pays foundation model companies, according to two sources familiar with the matter.

Startups looking to train their own AI coding models face an uphill battle, as it could easily cost millions to buy or rent the computing capacity needed to train a large language model. Replit earlier dropped plans to train its own model. Poolside, which has raised more than $600 million to build a coding-specific model, has announced a partnership with Amazon Web Services and is testing with customers, but hasn't made any product generally available yet. Another code-gen startup, Magic Dev, which has raised nearly $500 million since 2023, told investors a frontier-level coding model was coming in summer 2024 but has yet to launch a product. Poolside declined to comment. Magic Dev did not respond to a request for comment.

(Reporting by Anna Tong and Krystal Hu in New York. Editing by Kenneth Li and Michael Learmonth)

AI regulatory framework report expected by end-June, says Gobind

Free Malaysia Today

KUALA LUMPUR: A full report outlining Malaysia's proposed regulatory framework for artificial intelligence (AI) is expected to be completed by the end of June, according to digital minister Gobind Singh Deo.

The report, currently being finalised by the National Artificial Intelligence Office (NAIO), established last year under the digital ministry, will form the basis for how the country approaches AI regulation, whether through legislation, new rules or the adoption of common standards.

"Discussions with industry stakeholders are ongoing, and several views have already been presented," he told reporters at the launch of the cybersecurity Professional Capability Development Programme. "I hope that by the end of June, we will have a report from NAIO that can help chart an appropriate course for AI governance in Malaysia."

Also present at the event were digital ministry secretary-general Fabian Bigar, CyberSecurity Malaysia CEO Amirudin Abdul Wahab, and Sanjay Bavisi, president of EC-Council, a company involved in cybersecurity consultancy and training.

Gobind said the government's approach to AI would prioritise strong governance and public trust in digital technologies. "Amid this digital transformation, risks will inevitably arise. We must carefully consider how best to ensure public trust in digital platforms," he said. He added that any regulatory model must take into account the specific risks and characteristics of each sector affected by AI, given the technology's wide-ranging impact on all industries.

Google DeepMind CEO says global AI cooperation 'difficult'

The Star

LONDON: Artificial intelligence pioneer Demis Hassabis, head of Google DeepMind, said on June 2 that greater international cooperation on AI regulation was needed but "difficult" to achieve "in today's geopolitical context".

At a time when AI is being integrated across all industries, its uses have raised major ethical questions, from the spread of misinformation to its impact on employment and the loss of technological control. At London's South by Southwest (SXSW) festival on Monday, Hassabis, who won a Nobel Prize in Chemistry for his research on AI, also addressed the challenges that artificial general intelligence (AGI) - a technology that could match and even surpass human capability - would bring.

"The most important thing is it's got to be some form of international cooperation because the technology is across all borders. It's going to get applied to all countries," Hassabis said. "Many, many countries are involved in researching or building data centres or hosting these technologies. So I think for anything to be meaningful, there has to be some sort of international cooperation or collaboration and unfortunately that's looking quite difficult in today's geopolitical context," he said.

At Paris's AI summit in February, 58 countries - including China, France, India, the European Union and the African Union Commission - called for enhanced coordination on AI governance. But the US warned against "excessive regulation", with US Vice President JD Vance saying it could "kill a transformative sector". Alongside the US, the UK refused to sign the summit's appeal for an "open", "inclusive" and "ethical" AI.
Hassabis on Monday advocated for the implementation of "smart, adaptable regulation" because "it needs to kind of adapt to where the technology ends up going and what the problems end up being". – AFP
