
Latest news with #GregorDolinar

Human mathematicians best AI in competition

Express Tribune

6 hours ago



Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programs reaching gold-level scores for the first time, AFP reported. Neither model scored full marks — unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.

Google said that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

Around 10 per cent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition," OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation - far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. The IMO said tech companies had "privately tested closed-source AI models on this year's problems," the same ones faced by 641 competing students from 112 countries.
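For context on how the scores cited in these reports fit together: the IMO sets six problems, each marked out of 7 points, so the arithmetic behind the reported figures (a detail of the standard IMO format that the articles themselves do not spell out) is simply:

5 problems x 7 points = 35 points (the gold-level score reported for Gemini and OpenAI's model)
6 problems x 7 points = 42 points (a perfect score, achieved by five human contestants)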

Teens outperform Gemini, ChatGPT at top international math Olympiad

Indian Express

21 hours ago



With the growing adoption of artificial intelligence (AI), several industries are incorporating AI tools to work more efficiently. However, a group of teens at the International Mathematical Olympiad (IMO) beat several AI platforms, including Google's Gemini and OpenAI's ChatGPT.

Held in Queensland, Australia, the 2025 edition of the global competition brought together 641 young mathematicians under the age of 20 from 112 countries, five of whom achieved perfect scores of 42 points, something neither AI model could replicate, a report in Popular Science stated. Google announced that its advanced Gemini chatbot managed to solve five of the six problems presented at the competition, earning a total of 35 out of 42 points, a gold-medal score. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score,' IMO president Gregor Dolinar stated in a quote shared by the tech giant, the report said. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise, and most of them easy to follow,' he added.

OpenAI, creator of ChatGPT, also confirmed that its latest experimental reasoning model achieved a score of 35 points, the report added. According to OpenAI researcher Alexander Wei, the company evaluated its models using the same rules as the teen competitors. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants. For each problem, three former IMO medalists independently graded the model's submitted proof,' Wei wrote on social media, as per the report.

This year marks a significant leap for AI in math competitions. In 2024, Google's model earned a silver medal in Bath, UK, solving four out of six problems. That attempt took two to three days of computation. In contrast, the latest Gemini model completed this year's test within the official 4.5-hour time limit. The IMO acknowledged that technology firms had 'privately tested closed-source AI models on this year's problems,' which were the same as those faced by the human contestants.

Humans Outshine Google And OpenAI AI At Prestigious Math Olympiad Despite Record Scores

NDTV

2 days ago



At the International Mathematical Olympiad (IMO) held this month in Queensland, Australia, human participants triumphed over cutting-edge artificial intelligence models developed by Google and OpenAI. For the first time, these AI models achieved gold-level scores in the prestigious competition. Google announced on Monday that its advanced Gemini chatbot successfully solved five out of six challenging problems. However, neither Google's Gemini nor OpenAI's AI reached a perfect score. In contrast, five talented young mathematicians under the age of 20 achieved full marks, outperforming the AI models.

The IMO, regarded as the world's toughest mathematics competition for students, showed that human intuition and problem-solving skills still hold an edge over AI in complex reasoning tasks. The result highlights that while generative AI is advancing rapidly, it has yet to surpass the brightest human minds in every area of intellectual competition.

"We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points -- a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow." Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points.

US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition", OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation -- far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.

Humans beat AI gold-level score at top maths contest

Daily Tribune

2 days ago



Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks -- unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.

Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points -- a gold medal score,' the US tech giant cited IMO president Gregor Dolinar as saying. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.'

Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result 'achieved a longstanding grand challenge in AI' at 'the world's most prestigious math competition', OpenAI researcher Alexander Wei wrote on social media. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants,' he said. 'For each problem, three former IMO medalists independently graded the model's submitted proof.'

Humans beat AI gold-level score at top maths contest

Business Recorder

2 days ago



SYDNEY: Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks — unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.

Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score,' the US tech giant cited IMO president Gregor Dolinar as saying. 'Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.'

Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result 'achieved a longstanding grand challenge in AI' at 'the world's most prestigious math competition', OpenAI researcher Alexander Wei wrote on social media. 'We evaluated our models on the 2025 IMO problems under the same rules as human contestants,' he said. 'For each problem, three former IMO medalists independently graded the model's submitted proof.'

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation — far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said. The IMO said tech companies had 'privately tested closed-source AI models on this year's problems', the same ones faced by 641 competing students from 112 countries.
