
Latest news with #GeminiDeepThink

OpenAI and Google win at the world's most prestigious math competition

Euronews

5 hours ago

  • Science
  • Euronews

OpenAI and Google win at the world's most prestigious math competition

Artificial intelligence (AI) models were put to the test this weekend to find out which was the best so-called mathlete at the world's most prestigious competition in Australia. Google's DeepMind and OpenAI, which makes ChatGPT, say they both achieved gold medal-level performance at this year's International Mathematical Olympiad (IMO), though only Google had actually entered the competition. The IMO confirmed DeepMind's results, whereas OpenAI evaluated its model on the 2025 IMO problems and self-published its results before official verification. Alex Wei, a research scientist at OpenAI working on large language models (LLMs) and reasoning, announced the results on his X account.

An advanced version of DeepMind's Gemini Deep Think solved five of the six IMO problems perfectly, earning 35 total points and achieving gold medal-level performance. OpenAI's model also solved five of the six problems and received the same score. Both models show how far AI has come since the technology took off with the launch of ChatGPT in November 2022.

The test itself is very hard: only about 10 per cent of the 630 competitors received a gold medal this year. Participants from more than 100 countries entered the competition, which is aimed at elite high-school students; those under the age of 20 can apply.

'When we first started OpenAI, this was a dream but not one that felt very realistic to us; it is a significant marker of how far AI has come over the past decade,' OpenAI CEO Sam Altman wrote on X in reference to the math competition. He added that the company will 'soon' release a new version, GPT-5, but that it doesn't plan 'to release a model with IMO gold level of capability for many months'.

Google, which participated in the competition last year and won a silver medal, celebrated its progress in a blog post. "Our leap from silver to gold medal-standard in just one year shows a remarkable pace of progress in AI," the company said. Both companies nonetheless celebrated the human participants and avoided framing the competition as a man-versus-machine challenge. Wei called them "some of the brightest young minds of the future" and said that OpenAI employs some former IMO competitors.

World's First AI Model Wins Gold At International Math Olympiad. Check Details

NDTV

6 hours ago

  • Science
  • NDTV

World's First AI Model Wins Gold At International Math Olympiad. Check Details

Google's artificial intelligence (AI) research arm DeepMind has won a gold medal at the International Mathematical Olympiad (IMO), the world's most prestigious competition for young mathematicians. It is the first time a machine has solved five of the six problems in algebra, combinatorics, geometry, and number theory -- signalling a breakthrough in the math capabilities of AI systems that can rival human intelligence.

IMO problems are known for their difficulty, and solving them requires a deep understanding of mathematical concepts -- something AI models had not been able to achieve until now. However, an advanced version of Gemini Deep Think managed to ace the competition, in which 67 contestants, or about 11 per cent, achieved gold-medal scores.

"We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points, a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow," said IMO President Dr Gregor Dolinar.

Last year, DeepMind's combined AlphaProof and AlphaGeometry 2 systems achieved the silver-medal standard in the competition, but it took two to three days of computation. This year, the advanced Gemini model operated end-to-end in natural language and produced its results within the 4.5-hour competition time limit. The DeepMind team trained the model with novel reinforcement learning techniques that leverage more multi-step reasoning, problem-solving and theorem-proving data.

"We'll be making a version of this Deep Think model available to a set of trusted testers, including mathematicians, before rolling it out to Google AI Ultra subscribers," DeepMind CEO Demis Hassabis wrote on X (formerly Twitter).

Prior to Google, an OpenAI researcher also claimed that the startup had built technology that achieved a similar score on this year's questions, though it did not officially enter the competition.

"1/N I'm excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world's most prestigious math competition—the International Math Olympiad (IMO)." — Alexander Wei (@alexwei_), July 19, 2025

The advancements shown by the AI systems suggest the technology is less than a year away from being used by mathematicians to crack unsolved research problems at the frontier of the field. "I think the moment we can solve hard reasoning problems in natural language will enable the potential for collaboration between AI and mathematicians," Junehyuk Jung, a math professor at Brown University and visiting researcher in Google's DeepMind AI unit, was quoted as saying by Reuters.

OpenAI and Google both won gold at 2025 International Math Olympiad: Full story in 5 points

India Today

7 hours ago

  • Business
  • India Today

OpenAI and Google both won gold at 2025 International Math Olympiad: Full story in 5 points

In a first for artificial intelligence, OpenAI and Google have announced that their AI models scored gold medal-worthy results at the 2025 International Mathematical Olympiad (IMO), a prestigious global competition for high school students. The development is being seen as a landmark moment in the race to build AI systems that can reason like humans and solve complex academic problems. Here is the full story in five points:

1. OpenAI and Google used advanced reasoning-based AI models that worked through natural language rather than relying on traditional mathematical programming methods. Both companies' systems solved five out of six problems, a score that crosses the threshold for a gold medal at the IMO. This is the first time any AI model has reached that level of accuracy in the competition's history.

2. The 66th IMO was held on Australia's Sunshine Coast, with 630 student participants. Alongside them, Google's DeepMind AI unit officially took part with its "Gemini Deep Think" model, which had been introduced earlier at the company's I/O event in May. The model worked through all the problems in the same 4.5-hour time frame given to human participants, using plain English to process and solve the questions.

3. OpenAI, on the other hand, did not officially enter the contest but later shared that its own experimental model had achieved similar gold-level scores when given the same problems. OpenAI's scores were verified by three independent IMO medalists, according to the company. The model used a new method involving massively scaled-up "test-time compute", which essentially means the system was allowed to run longer and use greater computing power to think through multiple approaches in parallel. OpenAI researcher Noam Brown described the effort as computationally 'very expensive'.

4. While Google DeepMind had its results verified and certified by the IMO's committee, OpenAI revealed its achievement after the official competition results were made public. Both companies respected the IMO board's condition to delay announcements until the student rankings had been confirmed.

5. The achievement has sparked optimism among researchers. Professor Junehyuk Jung of Brown University, himself a former IMO gold medalist, said the progress shows how close AI is to playing a supporting role in solving high-level research problems in mathematics. According to Google, the breakthrough is not just about solving maths problems: it demonstrates that AI systems are now capable of applying logic and reasoning, potentially in fields like physics and theoretical computer science as well. While OpenAI confirmed it won't release such high-level mathematical tools to the public immediately, it hinted that the capabilities could soon extend beyond math.
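The "think longer, try many approaches in parallel, keep the best" idea described above is often called best-of-n sampling. As a minimal, hypothetical sketch only: the function names and the random placeholder "score" below are illustrative stand-ins, not either lab's actual code, which has not been published.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def attempt_solution(problem: str, seed: int) -> tuple[float, str]:
    """One independent reasoning pass. The random score is a stand-in
    for a real verifier that would grade a candidate solution."""
    rng = random.Random(seed)
    score = rng.random()  # placeholder for solution-quality checking
    return score, f"candidate {seed} for {problem!r}"

def best_of_n(problem: str, n: int = 8) -> tuple[float, str]:
    """Spend extra compute at inference time: launch n independent
    attempts in parallel and keep the highest-scoring one."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: attempt_solution(problem, s), range(n)))
    return max(candidates)  # tuples compare by score first

score, solution = best_of_n("IMO 2025 Problem 3", n=8)
```

Raising `n` buys accuracy with compute rather than with retraining, which is why Brown could describe the approach as "very expensive": the cost scales with how many parallel reasoning lines are run and how long each is allowed to think.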

AI systems from Google and OpenAI soar at global maths competition

TimesLIVE

12 hours ago

  • Business
  • TimesLIVE

AI systems from Google and OpenAI soar at global maths competition

OpenAI's breakthrough was achieved with a new experimental model centered on massively scaling up "test-time compute". This was done by allowing the model to "think" for longer periods and deploying parallel computing power to run many lines of reasoning simultaneously, according to Noam Brown, a researcher at OpenAI. Brown declined to say how much computing power it cost OpenAI, but called it "very expensive".

To OpenAI researchers, it is another clear sign that AI models can command extensive reasoning capabilities that could expand into areas beyond maths. The optimism is shared by Google researchers, who believe AI models' capabilities can apply to research quandaries in other fields such as physics, said Junehyuk Jung, a math professor at Brown University and visiting researcher in Google's DeepMind AI unit, who won an IMO gold medal as a student in 2003.

Of the 630 students participating in the 66th IMO on the Sunshine Coast in Queensland, Australia, 67 contestants, or about 11%, achieved gold medal scores.

Google's DeepMind AI unit last year achieved a silver medal score using AI systems specialised for maths. This year, Google used a general-purpose model called Gemini Deep Think, a version of which was previously unveiled at its annual developer conference in May. Unlike previous AI attempts that relied on formal languages and lengthy computation, Google's approach this year operated entirely in natural language and solved the problems within the official 4.5-hour time limit, the company said in a blog post.

OpenAI, which has its own set of reasoning models, similarly built an experimental version for the competition, according to a post by researcher Alexander Wei on social media platform X. He noted the company does not plan to release anything with this level of maths capability for several months.

This year marked the first time the competition coordinated officially with some AI developers, who have for years used prominent maths competitions such as the IMO to test model capabilities. IMO judges certified the results of the companies, including Google, and asked them to publish results on July 28.

"We respected the IMO board's original request that all AI labs share their results only after the official results had been verified by independent experts and the students had rightly received the acclamation they deserved," Google DeepMind CEO Demis Hassabis said on X on Monday.

OpenAI, which published its results on Saturday and first claimed gold medal status, said in an interview it had permission from an IMO board member to do so after the closing ceremony on Saturday. On Monday, the competition allowed cooperating companies to publish their results, said Gregor Dolinar, president of the IMO's board.

AI models of Google and OpenAI win milestone gold at global math contest

Business Standard

12 hours ago

  • Science
  • Business Standard

AI models of Google and OpenAI win milestone gold at global math contest

Alphabet's Google and OpenAI said their artificial-intelligence models won gold medals at a global mathematics competition, signaling a breakthrough in math capabilities in the race to build powerful systems that can rival human intelligence. The results marked the first time that AI systems crossed the gold-medal scoring threshold at the International Mathematical Olympiad for high-school students. Both companies' models solved five out of six problems, achieving the result using general-purpose "reasoning" models that processed mathematical concepts in natural language, in contrast to the previous approaches used by AI firms.

The achievement suggests AI is less than a year away from being used by mathematicians to crack unsolved research problems at the frontier of the field, according to Junehyuk Jung, a math professor at Brown University and visiting researcher in Google's DeepMind AI unit. "I think the moment we can solve hard reasoning problems in natural language will enable the potential for collaboration between AI and mathematicians," Jung told Reuters.

OpenAI's breakthrough was achieved with a new experimental model centered on massively scaling up "test-time compute." This was done by both allowing the model to "think" for longer periods and deploying parallel computing power to run numerous lines of reasoning simultaneously, according to Noam Brown, a researcher at OpenAI. Brown declined to say how much computing power it cost OpenAI, but called it "very expensive."

To OpenAI researchers, it is another clear sign that AI models can command extensive reasoning capabilities that could expand into other areas beyond math. The optimism is shared by Google researchers, who believe AI models' capabilities can apply to research quandaries in other fields such as physics, said Jung, who won an IMO gold medal as a student in 2003.

Google's DeepMind AI unit last year achieved a silver medal score using AI systems specialized for math. This year, Google used a general-purpose model called Gemini Deep Think, a version of which was previously unveiled at its annual developer conference in May. Unlike previous AI attempts that relied on formal languages and lengthy computation, Google's approach this year operated entirely in natural language and solved the problems within the official 4.5-hour time limit, the company said in a blog post.

OpenAI, which has its own set of reasoning models, similarly built an experimental version for the competition, according to a post by researcher Alexander Wei on social media platform X. He noted that the company does not plan to release anything with this level of math capability for several months.

This year marked the first time the competition coordinated officially with some AI developers, who have for years used prominent math competitions like the IMO to test model capabilities. IMO judges certified the results of those companies, including Google, and asked them to publish results on July 28. "We respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts and the students had rightly received the acclamation they deserved," Google DeepMind CEO Demis Hassabis said on X on Monday.

OpenAI, which published its results on Saturday and first claimed gold-medal status, said in an interview that it had permission from an IMO board member to do so after the closing ceremony on Saturday. The competition on Monday allowed cooperating companies to publish results, Gregor Dolinar, president of the IMO's board, told Reuters.
