
Latest news with #AlexanderWei

Humans Beat ChatGPT And OpenAI At Top Math Olympiad

NDTV · Science · 5 hours ago


SYDNEY, July 22, 2025 (AFP) -- Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks -- unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.

Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points -- a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points.

US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition", OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation -- far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said.

The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.

Humans beat AI gold-level score at top math contest

GMA Network · Science · 5 hours ago


Humans beat AI gold-level score at top maths contest

Mint · Science · 9 hours ago


Humans beat AI gold-level score at top maths contest

Al Etihad · Science · 9 hours ago


Humans beat AI gold-level score at top maths contest

Time of India · Science · 10 hours ago

