HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification
Abstract
Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research remains debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains of computational and applied mathematics, paired with an open-source framework for automated verification. The benchmark targets problems where discovery is hard, requiring genuine mathematical insight, but verifying a candidate solution is simple and computationally cheap; existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Because the solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best published results, representing potential novel contributions pending expert review. We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved classes could constitute novel results in the mathematical literature.
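As a concrete illustration of the discovery-hard, verification-easy asymmetry the abstract describes, below is a minimal sketch of what an automated checker for one such problem type could look like: finding a large Sidon set (all pairwise sums distinct) is difficult, but verifying a proposed one and comparing it to the best published size is an O(k^2) computation. The problem choice, function names, and scoring format here are hypothetical illustrations, not the released HorizonMath API.

```python
# Hypothetical verifier sketch: constructing a large Sidon set in {1, ..., n} is hard,
# but checking a proposed one is cheap. Problem and names are illustrative only.

from typing import Sequence


def is_sidon_set(candidate: Sequence[int], n: int) -> bool:
    """True if all elements lie in {1, ..., n} and all pairwise sums are distinct."""
    elems = sorted(set(candidate))
    if len(elems) != len(candidate):
        return False                      # duplicates are not allowed
    if elems and (elems[0] < 1 or elems[-1] > n):
        return False                      # element outside the allowed range
    seen_sums = set()
    for i, a in enumerate(elems):
        for b in elems[i:]:               # include a + a, matching the usual definition
            s = a + b
            if s in seen_sums:
                return False              # a repeated pairwise sum violates the Sidon property
            seen_sums.add(s)
    return True


def score(candidate: Sequence[int], n: int, best_known_size: int) -> dict:
    """Verify a model-proposed construction and compare it to the best published size."""
    valid = is_sidon_set(candidate, n)
    size = len(set(candidate)) if valid else 0
    return {
        "valid": valid,
        "size": size,
        "improves_best_known": valid and size > best_known_size,
    }


if __name__ == "__main__":
    # Tiny worked example: {1, 2, 5, 11} is a Sidon set in {1, ..., 12}.
    print(score([1, 2, 5, 11], n=12, best_known_size=4))
```

A verifier of this shape makes the benchmark self-grading: a proposed construction either passes the cheap check and beats the recorded best-known value or it does not, with no human judge or formal proof assistant in the loop.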
Community
Measuring AI Progress on Mathematical Discovery with HorizonMath: GPT 5.4 Pro proposes candidate solutions that may improve on best-known published results.
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- AI for Mathematics: Progress, Challenges, and Prospects (2026)
- Construction-Verification: A Benchmark for Applied Mathematics in Lean 4 (2026)
- LemmaBench: A Live, Research-Level Benchmark to Evaluate LLM Capabilities in Mathematics (2026)
- Judging What We Cannot Solve: A Consequence-Based Approach for Oracle-Free Evaluation of Research-Level Math (2026)
- Code2Math: Can Your Code Agent Effectively Evolve Math Problems Through Exploration? (2026)
- Towards Autonomous Mathematics Research (2026)
- Benchmarks Saturate When The Model Gets Smarter Than The Judge (2026)