
New training programme launched for young lawyers to stay ahead of AI curve, strengthen basic advocacy and drafting skills
Source: Business Times
Article Date: 22 May 2025
Author: Tessa Oh
The Junior Lawyers Professional Certification Programme (JLP) will bring structure to what was previously left to chance, says Singapore Academy of Law CEO Yeong Zee Kin.
In response to technological disruptions in the legal sector, a new training programme will equip young lawyers with skills and knowledge in artificial intelligence (AI), as well as strengthen proficiency in basic advocacy and drafting skills.
Set up by the Singapore Academy of Law (SAL), the Junior Lawyers Professional Certification Programme (JLP) offers structured training for young lawyers in both disputes and corporate practice areas.
Participants can, for instance, take courses on the ethics of generative AI, prompt engineering for lawyers, and cross-border contract drafting, among others.
The programme was launched on Wednesday (May 21), with an opening conference held at Parkroyal Collection Marina Bay.
It is open to lawyers with up to five years of post-qualification experience. In addition to the mandatory opening conference and masterclass, participants are required to complete 11 more modules within two years in order to obtain certification.
To earn certification in either the disputes or corporate track, lawyers are required to complete at least four modules that are specific to their chosen area of specialisation.
Most of the disputes modules will be led by current or former members of the judiciary as trainers or guest speakers.
In his opening address, SAL chief executive Yeong Zee Kin said the JLP will 'bring structure to what was previously left to chance', ensuring that lawyers learn fundamental legal skills in a holistic way.
Participation in the programme is voluntary but recommended. Lawyers can use SkillsFuture credits to offset fees, while some law firms have offered to sponsor their associates.
Disruptive shifts
The JLP chiefly seeks to address the disruptive impact of generative AI on the legal sector, Yeong told The Business Times in an interview before the launch.
The widespread accessibility of generative AI tools such as Microsoft Copilot and ChatGPT has made it possible for anyone to generate simple contracts, or even seek advice on litigation strategy – even without formal legal training, noted Yeong.
'Because all these tools are coming on stream and clients have access to them, it means that clients' expectations when they come to see a lawyer is going to be higher,' he added.
In this environment, lawyers need to move beyond basic information gathering to deliver greater value to their clients.
The JLP thus aims to plug this gap by helping young lawyers keep abreast of AI advancements and by strengthening their proficiency in basic legal skills.
Yeong views the programme as a bridge between the Bar exams and the specialist accreditation exams that senior lawyers take when seeking to specialise in a particular field.
Lawyers are required to take modules each year to fulfil continuing professional development (CPD) requirements, but these courses are usually ad hoc in nature, he said.
The JLP, on the other hand, provides a more 'structured way for some of these very fundamental skills and very crucial domain knowledge' for young lawyers.
And since the programme is voluntary, Yeong hopes it attracts serious participants.
'If you want to just take enough courses to fill your CPD requirements, there are a lot of free and cheap courses,' he said. 'This course is not for lawyers with that kind of mentality… it is meant for those who want to learn.'
Addressing attrition
While the JLP aims to reduce attrition among young lawyers through career support, Yeong acknowledged that it does not resolve the perennial issue of high work demands and long hours within the legal profession.
To this, he said SAL has other plans in the works, such as the legal profession symposium in July.
'That's intended to address workplace issues, (such as) the changing expectations between different generations of lawyers, the interactions between juniors and seniors,' he said.
Workplace pressures could also deter young lawyers from taking up training courses, as they must juggle personal development with tight work deadlines.
Acknowledging this, Yeong said it would not be feasible to require all law firms to allow their associates time off to attend the programme.
What SAL has done is to get law firms to sign a training pledge to demonstrate their commitment to supporting the JLP and other training initiatives. Fifty-two legal organisations have signed this pledge.
SAL will work with the firms to ensure that they develop good practices over time, said Yeong.
It will also monitor the programme's sign-up rates to see if its 'message is not getting through', he added. More than half of the 80 slots for the programme have been taken up thus far.
'Law is a knowledge-based profession, so the acquisition of knowledge will never end, because things change, business models change… new areas of law will come out,' said Yeong. 'We need to continue sharpening our skills and learning new things.'
Source: The Business Times © SPH Media Limited. Permission required for reproduction.