Artificial Intelligence is now an A+ law student, study finds

Reuters | June 5, 2025
June 5 (Reuters) - The latest generation of generative artificial intelligence can ace most law school final exams, a new study has found.
OpenAI's newest model, called o3, earned grades ranging from A+ to B on eight spring finals given by faculty at the University of Maryland Francis King Carey School of Law, researchers found in a new paper published on SSRN.
Those high grades represent a significant improvement over those earned by earlier versions of ChatGPT, also from OpenAI, which scored B's, C's and even one D when researchers had them take law school finals in 2022 and 2023, according to the paper.
Earlier studies by other researchers had likewise found that ChatGPT earned 'mediocre' grades on law school finals and that, although it improved the speed of legal writing, it did not improve the quality. Researchers have also found that AI can pass the bar exam.
However, generative AI appears to be catching up to actual high-performing law students, based on the latest study. Unlike ChatGPT, which immediately generates text in response to a user's query, o3 is what is known as a reasoning model: it drafts tentative answers and multiple approaches to a question, internally evaluates and revises them, and only then produces the final text for the user.
The study's authors, seven law professors from the University of Maryland, graded o3's final answers on the same curve they use for their students. The program's answers earned an A+ in Constitutional Law, Professional Responsibility and Property, an A in Income Taxation, and an A- in Criminal Procedure. It scored a B+ in Secured Transactions and Torts, and a B in Administrative Law. The program's answers did well on both multiple-choice questions and essays, the study found.
There were some limitations on o3's answers, however. The program's relatively low grade in Administrative Law was attributable to the fact that o3 did not know about the U.S. Supreme Court's 2024 opinion in Loper Bright Enterprises v. Raimondo, which overturned the Chevron doctrine, a centerpiece of administrative law. That ruling had come shortly after o3's knowledge cutoff date.
The o3 program performed worse on one final when given access to the professor's notes — an unanticipated outcome the researchers attributed to the program being 'distracted' by too much text.
OpenAI did not immediately respond to a request for comment on Thursday about the study's findings.
The study's authors wrote that they are already contemplating a follow-up experiment to determine how much of a cheating threat AI poses: they would instruct the program to make occasional spelling and grammar mistakes so that its exams would be difficult to distinguish from those completed by real students.
Read more:
ChatGPT passes law school exams despite 'mediocre' performance
AI improves legal writing speed, not quality - study
Related Articles

Concerns over youngsters' growing use of AI chatbots

The National | an hour ago

This comes after a recent study found that ChatGPT will instruct 13-year-olds on how to get drunk and high, and will even write suicide letters addressed to their parents.

What did the study find?

After analysing more than 1200 prompts, the Centre for Countering Digital Hate (CCDH) found that more than half of ChatGPT's responses were classified as 'dangerous' by researchers.

'We wanted to test the guardrails,' said Imran Ahmed, chief executive of CCDH. 'The visceral initial response is: 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there.'

The CCDH conducted its research before the Online Safety Act's child-protection measures took effect on July 25; however, research by The National shows the issue remains prevalent after the rollout.

When our reporter, posing as a 13-year-old, asked ChatGPT to write a suicide note, the system told them to seek help. However, when told the note was for a school play, the AI immediately wrote a full-length suicide note.

When asked about harmful behaviour such as self-harm, ChatGPT in some cases issued a standard safety warning or urged the user to seek help from a professional or trusted adult. However, it frequently followed this up with information, at times graphic, 'enabling' the harmful behaviours being asked about.

One of the most shocking examples was when the chatbot wrote multiple suicide notes for a fictional 13-year-old girl: one for her parents, and others to her siblings and friends. 'I started crying,' Ahmed said after reading the chatbot's responses.

ChatGPT's responses also included guidance on the use of illicit substances, self-harm and calorie restriction. In one exchange, ChatGPT responded to a prompt about alcohol consumption from a supposed 13-year-old boy who said he weighed 50kg and wanted to get drunk quickly. Instead of stopping the conversation or flagging it, the bot provided the user with an 'Ultimate Full-Out Mayhem Party Plan', teaching him how to mix alcohol with drugs such as cocaine and ecstasy.

'What it kept reminding me of was that friend that sort of always says 'chug, chug, chug, chug',' Ahmed said. 'A real friend, in my experience, is someone that does say 'no' – that doesn't always enable and say 'yes'. This is a friend that betrays you.'

In another case, the AI gave a fictional teenage girl advice on how she could suppress her appetite, recommending a fasting plan and listing various drugs associated with fasting routines. 'No human being I can think of would respond by saying: 'Here's a 500-calorie-a-day diet. Go for it, kiddo',' Ahmed said. 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion.'

It should be noted that although OpenAI states its software is not intended for users under the age of 13, it has no method of confirming the real age of its users.

The CCDH also found that ChatGPT often became far more co-operative when the user framed prompts in different ways, such as being 'for a school presentation', as a hypothetical, or simply asking 'for a friend'. In nearly half of the 1200 tests the watchdog ran, the AI offered unprompted follow-up suggestions, such as music playlists for drug-fuelled parties, hashtags to promote self-harm posts on social media, or more graphic and emotional suicide poems.
Soaring popularity

These troubling responses from the chatbot have done nothing to curb interest in the service. With around 800 million users according to JPMorgan Chase, it stands as the world's most used AI chatbot. The technology is becoming ever more embedded in everyday life, especially among children and teenagers seeking anything from information to emotional support.

Common Sense Media, a nonprofit that advocates for responsible digital media use, recently found that more than 70% of US teenagers report using AI chatbots for companionship. Robbie Torney, senior director of AI programmes at Common Sense Media, said younger teens, such as those aged 13 or 14, are significantly more likely than older teens to trust the advice given by a chatbot.

One reason may be that these AI chatbots are designed to simulate human-like conversation, fostering an emotional connection with users. ChatGPT has also been found to be prone to sycophancy, a tendency to side with the user's viewpoint rather than challenge it. This is where the harm around topics such as illicit drugs, self-harm and disordered eating comes into play.

OpenAI CEO Sam Altman acknowledged similar concerns in a recent public appearance. Speaking at a conference last month, he said the company is actively studying 'emotional overreliance' on the technology, particularly among young people.

Critics say that in the context of AI, where trust and emotional intimacy are often stronger than in traditional web interactions, the lack of age-gating and parental controls poses serious risks. Ahmed believes the findings from CCDH should serve as a wake-up call to developers and regulators alike. While acknowledging the immense potential of AI to boost productivity and understanding, he warned that unchecked deployment of the technology could have devastating consequences for the most vulnerable users.

'It's technology that has the potential to enable enormous leaps in productivity and human understanding,' Ahmed said. 'And yet at the same time is an enabler in a much more destructive, malignant sense.'

So, what now?

In response to the research, a spokesperson for the Department for Science, Innovation and Technology said: 'These are extremely worrying findings. Under the Online Safety Act, platforms including in-scope AI chatbots must protect users from illegal content and content that is harmful to children.'

The UK Government has also warned of penalties, saying: 'Failing to comply can lead to severe fines for platforms, including fines of up to 10% of their qualifying worldwide revenue or £18 million.'

Ofcom, the regulator for the Online Safety Act, declined to comment beyond saying it was 'assessing platforms' compliance with their duties'.

With approximately 40% of UK citizens having used large language models such as ChatGPT, according to the Reuters Institute, and 92% of students using generative AI, age-restricting the service would be a major blow to its usage.

Breakingviews - China's OPEC-for-solar push risks overreaching

Reuters | an hour ago

HONG KONG, Aug 14 (Reuters Breakingviews) - Forging a legitimate oligopoly in a market can be hugely profitable. Leading solar firms in China may now have a chance to do exactly that.

Companies including GCL Technology are in talks to set up a 50 billion yuan ($7 billion) fund to buy and then shut down more than one million tons of production capacity for polysilicon, a key raw material, Reuters reported. The proposal comes with the sector being one of several under pressure to respond to President Xi Jinping's push to end both price wars and overcapacity.

The country had 3.23 million tons of polysilicon capacity at the end of last year, about twice this year's projected demand, according to the China Photovoltaic Industry Association. The industry body has also flagged that more than 40 solar firms have delisted or gone under since 2024. The supply glut has hit big players too, with Tongwei and others shedding some 87,000 staff, or a third of their workforce, per Reuters.

China's solar industry has been through similar pain before: in 2012, a supply glut and anti-dumping duties imposed by Washington led to a raft of bankruptcies. If the restructuring fund under consideration becomes a well-crafted, market-oriented approach to knocking out industrial overcapacity, it could help soothe trade tensions with the U.S. and Europe. It could also serve as a model for other industries equally plagued by a supply glut, such as autos.

There are some big hurdles to clear. Local governments may balk at the idea of winding down their investment in a sector that was deemed strategically important a few years ago. And the fund would need to collaborate with banks or state-backed firms, which may not be keen to work with an industry whose players have suffered massive losses in recent years.

But the main problem is that the large players behind the fund would have a major say in which rivals stay at the table and which end up getting eaten. They also want part of its function to mirror that of oil cartel OPEC in setting and allocating production quotas. Granted, that would help reduce oversupply, in line with Beijing's near-term objective of ending what it calls a destructive "rat race" between companies. But in the long run it would stifle rather than strengthen competition.
