February 10, 2025
DeepMind AI achieves gold-medal-level performance on challenging Olympiad math questions

A team of researchers at Google's DeepMind project reports that its AlphaGeometry2 AI performed at a gold-medal level when tasked with solving problems that were given to high school students participating in the International Mathematical Olympiad (IMO) over the past 25 years. In their paper posted on the arXiv preprint server, the team gives an overview of AlphaGeometry2 and its scores when solving IMO problems.
Prior research has suggested that AI capable of solving geometry problems could lead to more sophisticated applications, because such problems require both a high level of reasoning ability and the ability to choose among possible steps when working toward a solution.
To that end, the team at DeepMind has been working on developing increasingly sophisticated geometry-solving applications. Its first iteration, called AlphaGeometry, was released last January; its second iteration is called AlphaGeometry2.
The team at DeepMind has been combining it with another system it developed called AlphaProof, which conducts mathematical proofs. The team found that the combination was able to solve four of the six problems posed at the IMO this past summer. For this new study, the researchers expanded testing of the system's abilities by giving it multiple problems used by the IMO over the past 25 years.
The research team built AlphaGeometry2 by combining several core components, one of which is Google's Gemini language model. Other components use mathematical rules to work out solutions to the original problem or to parts of it.
The team notes that to solve many IMO problems, certain constructs must be added before proceeding, which means their system has to be able to create them. The system then tries to predict which of the constructs added to a diagram should be used to make the deductions required to solve a problem. AlphaGeometry2 suggests steps that might be used to solve a given problem and then checks the steps for logic before using them.
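The propose-and-verify cycle described here can be pictured as a simple loop: a proposer suggests auxiliary constructions, and a symbolic deduction engine forward-chains over known facts until the goal is reached or deduction stalls. The toy Python below is a minimal sketch under stated assumptions; the string facts, the two deduction rules, and the propose_constructions stub are invented stand-ins for illustration, not DeepMind's actual code or API.

```python
# Toy sketch of a neuro-symbolic propose-and-verify loop.
# Everything here is a hypothetical stand-in for illustration only;
# it is not AlphaGeometry2's actual representation or API.

# Deduction rules as (premises, conclusion) pairs over string facts.
RULES = [
    ({"AB == AC"}, "triangle ABC is isosceles"),
    ({"triangle ABC is isosceles", "M is midpoint of BC"}, "AM perp BC"),
]

def deduce(facts):
    """Forward-chain: apply every rule whose premises hold, to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_constructions(facts, goal):
    """Stand-in for the language model: suggest auxiliary constructs.
    A real system would rank candidates from a trained model."""
    return {"M is midpoint of BC"}  # hypothetical suggestion

def solve(givens, goal, max_rounds=3):
    facts = set(givens)
    for _ in range(max_rounds):
        facts = deduce(facts)          # symbolic engine exhausts deductions
        if goal in facts:
            return facts               # goal derived; facts form a proof trace
        facts |= propose_constructions(facts, goal)  # add a construct, retry
    return None

# Example: given an isosceles triangle, prove the median AM is an altitude.
print(solve({"AB == AC"}, "AM perp BC"))
```

Each round alternates exhaustive symbolic deduction with a fresh batch of suggested constructions, mirroring the article's description of steps being proposed and then checked for logic before use.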
To test their system, the researchers chose 45 problems from the IMO, some of which required translating into a more usable form, resulting in 50 total problems. They report that AlphaGeometry2 was able to solve 42 of them correctly, slightly better than the average human gold medalist in the competition.
More information: Yuri Chervonyi et al, Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2, arXiv (2025). DOI: 10.48550/arxiv.2502.03544
Journal information: arXiv
© 2025 Science X Network
Citation: DeepMind AI achieves gold-medal level performance on challenging Olympiad math questions (2025, February 10) retrieved 11 February 2025 from https://techxplore.com/news/2025-02-deepmind-ai-gold-medal-olympiad.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.