Latest news with #GregorDolinar


Indian Express
5 hours ago
- Science
- Indian Express
Teens outperform Gemini, ChatGPT at top international math Olympiad
With the growing adoption of artificial intelligence (AI), several industries are incorporating AI tools to work more efficiently. However, a group of teens at the International Mathematical Olympiad (IMO) beat several AI platforms, such as Google's Gemini and OpenAI's ChatGPT. Held in Queensland, Australia, the 2025 edition of the global competition brought together 641 young mathematicians under the age of 20 from 112 countries, five of whom achieved perfect scores of 42 points, something neither AI model could replicate, a report in Popular Science stated. Google announced that its advanced Gemini chatbot managed to solve five of the six problems presented at the competition, earning a total of 35 out of 42 points, a gold-medal score. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score,' IMO president Gregor Dolinar stated in a quote shared by the tech giant, the report said. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise, and most of them easy to follow,' he added. OpenAI, creator of ChatGPT, also confirmed that its latest experimental reasoning model achieved a score of 35 points, the report added. According to OpenAI researcher Alexander Wei, the company evaluated its models using the same rules as the teen competitors. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants. For each problem, three former IMO medalists independently graded the model's submitted proof,' Wei wrote on social media, as per the report. This year marks a significant leap for AI in math competitions. In 2024, Google's model earned a silver medal in Bath, UK, solving four out of six problems. That attempt took two to three days of computation. In contrast, the latest Gemini model completed this year's test within the official 4.5-hour time limit.
The IMO acknowledged that technology firms had 'privately tested closed-source AI models on this year's problems,' which were the same as those faced by the human contestants.


NDTV
a day ago
- Science
- NDTV
Humans Outshine Google And OpenAI AI At Prestigious Math Olympiad Despite Record Scores
At the International Mathematical Olympiad (IMO) held this month in Queensland, Australia, human participants triumphed over cutting-edge artificial intelligence models developed by Google and OpenAI. For the first time, these AI models achieved gold-level scores in the prestigious competition. Google announced on Monday that its advanced Gemini chatbot successfully solved five out of six challenging problems. However, neither Google's Gemini nor OpenAI's AI reached a perfect score. In contrast, five talented young mathematicians under the age of 20 achieved full marks, outperforming the AI models. The IMO, regarded as the world's toughest mathematics competition for students, showcased that human intuition and problem-solving skills still hold an edge over AI in complex reasoning tasks. This result highlights that while generative AI is advancing rapidly, it has yet to surpass the brightest human minds in all areas of intellectual competition. "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points -- a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow." Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition", OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof." Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. 
That took two to three days of computation -- far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.


Daily Tribune
a day ago
- Science
- Daily Tribune
Humans beat AI gold-level score at top maths contest
Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks -- unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old. Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points -- a gold medal score,' the US tech giant cited IMO president Gregor Dolinar as saying. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.' Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result 'achieved a longstanding grand challenge in AI' at 'the world's most prestigious math competition', OpenAI researcher Alexander Wei wrote on social media. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants,' he said. 'For each problem, three former IMO medalists independently graded the model's submitted proof.'


Business Recorder
a day ago
- Science
- Business Recorder
Humans beat AI gold-level score at top maths contest
SYDNEY: Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks — unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old. Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score,' the US tech giant cited IMO president Gregor Dolinar as saying. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.' Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result 'achieved a longstanding grand challenge in AI' at 'the world's most prestigious math competition', OpenAI researcher Alexander Wei wrote on social media. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants,' he said. 'For each problem, three former IMO medalists independently graded the model's submitted proof.' Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation — far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. The IMO said tech companies had 'privately tested closed-source AI models on this year's problems', the same ones faced by 641 competing students from 112 countries.


Yahoo
2 days ago
- Science
- Yahoo
Humans beat AI at annual math Olympiad, but the machines are catching up
Sydney — Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, but the programs reached gold-level scores for the first time, and the rate at which they are improving may be cause for some human introspection. Neither of the AI models scored full marks — unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old. Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six math problems set at the IMO, held in Australia's Queensland this month. "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points - a gold medal score," the U.S. tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow." Around 10% of human contestants won gold-level medals, and five received perfect scores of 42 points. U.S. ChatGPT maker OpenAI said its experimental reasoning model had also scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition," OpenAI researcher Alexander Wei said in a social media post. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof." Google achieved a silver-medal score at last year's IMO in the city of Bath, in southwest England, solving four of the six problems. That took two to three days of computation — far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. 
The IMO said tech companies had "privately tested closed-source AI models on this year's problems," the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organizers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he noted. In an interview with CBS' 60 Minutes earlier this year, one of Google's leading AI researchers predicted that within just five to 10 years, computers would be made that have human-level cognitive abilities — a landmark known as "artificial general intelligence." Google DeepMind CEO Demis Hassabis predicted that AI technology was on track to understand the world in nuanced ways, and to not only solve important problems, but even to develop a sense of imagination, within a decade, thanks to an increase in investment. "It's moving incredibly fast," Hassabis said. "I think we are on some kind of exponential curve of improvement. Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So that's adding to the, to this exponential progress."