Study accuses LM Arena of helping top AI labs game its benchmark
A new paper from AI lab Cohere, Stanford, MIT, and Ai2 accuses LM Arena, the organization behind the popular crowdsourced AI benchmark Chatbot Arena, of helping a select group of AI companies achieve better leaderboard scores at the expense of rivals.
According to the authors, LM Arena allowed a handful of industry-leading AI companies, including Meta, OpenAI, Google, and Amazon, to privately test several variants of their AI models and then withhold the scores of the lowest performers. This made it easier for these companies to achieve a top spot on the platform's leaderboard, though the opportunity was not afforded to every firm, the authors say.
"Only a handful of [companies] were told that this private testing was available, and the amount of private testing that some [companies] received is just so much more than others," said Cohere's VP of AI research and co-author of the study, Sara Hooker, in an interview with TechCrunch. "This is gamification."
Created in 2023 as an academic research project out of UC Berkeley, Chatbot Arena has become a go-to benchmark for AI companies. It works by putting answers from two different AI models side-by-side in a "battle," and asking users to choose the best one. It's not uncommon to see unreleased models competing in the arena under a pseudonym.
Votes over time contribute to a model's score — and, consequently, its placement on the Chatbot Arena leaderboard. While many commercial actors participate in Chatbot Arena, LM Arena has long maintained that its benchmark is an impartial and fair one.
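Chatbot Arena's leaderboard is built from these pairwise votes using a Bradley-Terry-style rating model. As a rough illustration of how head-to-head votes translate into a score, here is a minimal Elo-style update sketch; the K-factor, starting rating, and function names are illustrative assumptions, not LM Arena's actual implementation.

```python
# Minimal Elo-style rating update from pairwise "battle" votes.
# Illustrative only: Chatbot Arena uses a Bradley-Terry-based method;
# the K-factor and starting rating here are arbitrary choices.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under an Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return new ratings for both models after one battle."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two models start equal; model A wins one battle and gains rating.
ra, rb = 1000.0, 1000.0
ra, rb = update(ra, rb, a_won=True)
```

The key property for the study's argument: each additional battle nudges a model's rating, so a lab that can quietly test many variants and publish only the best-rated one gets a selection advantage over labs that submit a single model.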
However, that's not what the paper's authors say they uncovered.
One AI company, Meta, was able to privately test 27 model variants on Chatbot Arena between January and March leading up to the tech giant's Llama 4 release, the authors allege. At launch, Meta only publicly revealed the score of a single model — a model that happened to rank near the top of the Chatbot Arena leaderboard.
In an email to TechCrunch, LM Arena co-founder and UC Berkeley professor Ion Stoica said that the study was full of "inaccuracies" and "questionable analysis."
"We are committed to fair, community-driven evaluations, and invite all model providers to submit more models for testing and to improve their performance on human preference," said LM Arena in a statement provided to TechCrunch. "If a model provider chooses to submit more tests than another model provider, this does not mean the second model provider is treated unfairly."
Armand Joulin, a principal researcher at Google DeepMind, also noted in a post on X that some of the study's numbers were inaccurate, claiming Google only sent one Gemma 3 AI model to LM Arena for pre-release testing. Hooker responded to Joulin on X, promising the authors would make a correction.
The paper's authors started conducting their research in November 2024 after learning that some AI companies were possibly being given preferential access to Chatbot Arena. In total, they measured more than 2.8 million Chatbot Arena battles over a five-month stretch.
The authors say they found evidence that LM Arena allowed certain AI companies, including Meta, OpenAI, and Google, to collect more data from Chatbot Arena by having their models appear in a higher number of model "battles." This increased sampling rate gave these companies an unfair advantage, the authors allege.
According to the paper, training on additional data collected from Chatbot Arena could improve a model's performance on Arena Hard, another benchmark LM Arena maintains, by as much as 112%. However, LM Arena said in a post on X that Arena Hard performance does not directly correlate to Chatbot Arena performance.
Hooker said it's unclear how certain AI companies might've received priority access, but that it's incumbent on LM Arena to increase its transparency regardless.
In a post on X, LM Arena said that several of the claims in the paper don't reflect reality. The organization pointed to a blog post it published earlier this week indicating that models from non-major labs appear in more Chatbot Arena battles than the study suggests.
One important limitation of the study is that it relied on "self-identification" to determine which AI models were in private testing on Chatbot Arena. The authors prompted AI models several times about their company of origin, and relied on the models' answers to classify them — a method that isn't foolproof.
However, Hooker said that when the authors reached out to LM Arena to share their preliminary findings, the organization didn't dispute them.
TechCrunch reached out to Meta, Google, OpenAI, and Amazon — all of which were mentioned in the study — for comment. None immediately responded.
In the paper, the authors call on LM Arena to implement a number of changes aimed at making Chatbot Arena more "fair." For example, the authors say, LM Arena could set a clear and transparent limit on the number of private tests AI labs can conduct, and publicly disclose scores from these tests.
In a post on X, LM Arena rejected these suggestions, claiming it has published information on pre-release testing since March 2024. The benchmarking organization also said it "makes no sense to show scores for pre-release models which are not publicly available," because the AI community cannot test the models for themselves.
The researchers also say LM Arena could adjust Chatbot Arena's sampling rate to ensure that all models in the arena appear in the same number of battles. LM Arena has been receptive to this recommendation publicly, and indicated that it'll create a new sampling algorithm.
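The recommendation amounts to drawing battle pairings uniformly so every model accrues a comparable number of battles. A minimal sketch of what a uniform pairing scheduler could look like follows; the model names and function are hypothetical, and LM Arena has not published the algorithm it plans to use.

```python
# Sketch of a uniform battle scheduler: every pair of models is equally
# likely to be drawn, so appearance counts even out across models.
# Hypothetical names; not LM Arena's actual sampling algorithm.
import itertools
import random

def uniform_battle_schedule(models, n_battles, seed=0):
    """Draw n_battles model pairs uniformly at random from all pairs."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(models, 2))
    return [rng.choice(pairs) for _ in range(n_battles)]

models = ["model-a", "model-b", "model-c", "model-d"]
schedule = uniform_battle_schedule(models, 6000)

# Count how many battles each model appears in; under uniform pairing,
# every model lands near the same expected count (here, ~3000 of 6000).
counts = {m: sum(m in pair for pair in schedule) for m in models}
```

By contrast, the weighted sampling the authors describe would boost the draw probability for favored models, letting them accumulate more battles, and hence more preference data, than rivals.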
The paper comes weeks after Meta was caught gaming benchmarks in Chatbot Arena around the launch of its above-mentioned Llama 4 models. Meta optimized one of the Llama 4 models for "conversationality," which helped it achieve an impressive score on Chatbot Arena's leaderboard. But the company never released the optimized model — and the vanilla version ended up performing much worse on Chatbot Arena.
At the time, LM Arena said Meta should have been more transparent in its approach to benchmarking.
Earlier this month, LM Arena announced it was launching a company, with plans to raise capital from investors. The study heightens scrutiny of private benchmark organizations, and of whether they can be trusted to assess AI models without corporate influence clouding the process.