Earlier this month, an artificial intelligence (AI) startup announced that its AI agent had verified proofs of two instances of the fiendishly challenging “higher-dimensional sphere packing problem.” In 2022, those proofs earned the Ukrainian mathematician Maryna Viazovska a Fields Medal, one of the most prestigious prizes in mathematics.
This was a giant step forward, and speaks to the emergence of a quiet revolution in the field.
On the surface, it might not seem so extraordinary. After all, mathematicians have long used tools to expand their abilities — protractors, slide rules, calculators, and eventually computers. Yet none of these tools replaced mathematicians; they allowed us to turn our attention to more interesting problems. The arrival of AI in mathematics can feel like a new step in the same process. But there is a crucial difference: This time the tools don’t just help us calculate; they help us reason, or at least perform many of the routines that underlie human reasoning.

Kit Yates is Professor of Mathematical Biology and Public Engagement at the University of Bath in the UK
The change has been going on for a while. For years, our greatest proofs have not been the work of individual mathematicians. Many modern research papers in pure mathematics now rely on huge conceptual frameworks, long chains of dependencies, and catalogs of results that no single person can fully internalize. Computers have played a role in major proofs before, such as the four-color theorem and the Kepler conjecture. But what is changing now is the level of autonomy and reliability we can expect from AI systems working alongside formal proof assistants — programs designed to check mathematical arguments.
These formal proof languages express mathematical arguments in a way that a computer can check step by step, guaranteeing that each part of the proof is logically sound. Take the language Lean, for example. Unlike ordinary mathematical writing, Lean requires every definition and inference to be stated explicitly, and it checks every step mechanically and methodically. It’s unforgiving, but in a productive way: If Lean accepts the argument, then in principle the proof contains no hidden assumptions or leaps of faith. In recent years, Lean has become a testing ground for research-level mathematics, and mathematicians have built “libraries” to support increasingly complex problems.
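To give a flavor of the style — a toy illustration, not drawn from the project’s actual code — here is how a simple fact about natural numbers looks in Lean, where the proof must be supplied explicitly and is checked mechanically:

```lean
-- A toy Lean 4 example: the checker accepts this statement only
-- because the proof reduces to an already-verified library lemma.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- an existing lemma closes the goal
```

If the proof term were wrong or incomplete, Lean would simply refuse to accept the theorem; there is no way to wave one’s hands past a gap.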
These libraries are huge collections of definitions and already verified theorems that have been painstakingly programmed so that researchers can prove new results in the language. But until recently, turning cutting-edge proofs into machine-checkable form required specialists to devote months or years to the work.
That is the context in which the recent formal verification of Viazovska’s higher-dimensional sphere-packing results should be understood. The sphere-packing problem asks how tightly identical spheres can be packed together in a space of any dimension, not just the 3D world we live in. Before Viazovska’s breakthrough, the sphere-packing problem had only been fully solved in dimensions one, two, and three, with all higher-dimensional cases remaining open. Viazovska’s proofs for the eight- and 24-dimensional sphere-packing problems are deep pieces of mathematical insight that solved problems previously thought out of reach.
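For readers who want the numbers: the optimal densities established by these results are known in closed form (values from the published proofs, not from the formalization project itself), achieved by the E8 lattice in dimension eight and the Leech lattice in dimension 24:

```latex
\Delta_8 = \frac{\pi^4}{384} \approx 0.2537,
\qquad
\Delta_{24} = \frac{\pi^{12}}{12!} \approx 0.00193
```

In other words, even in the best possible arrangement, spheres fill only about a quarter of eight-dimensional space, and a fraction of a percent of 24-dimensional space.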
Fields Medal level
The latest important step forward is that a human-AI collaboration has now translated these arguments into fully verified Lean code, with the proof assistant checking every step. The scale of this achievement is astonishing: these are recent Fields Medal-level results, and they have now been certified to a level of detail and certainty that would be impossible for individual referees, or even large teams of human specialists, to reproduce without assistance.
A key ingredient was Math, Inc.’s AI reasoning agent, Gauss, which played an important role in turning human mathematical arguments into Lean proofs. The AI system didn’t work entirely without help; mathematicians still had to set up the plan, shape the overall structure, and make sure the right concepts were in place. But once the scaffolding was there, the system could fill in the missing pieces with extraordinary speed. In the eight-dimensional case, it completed work that the human contributors had estimated would take them months, and it did so in days. The 24-dimensional case, which is even more intricate, followed soon after.
This is more than a technical achievement. It points to a shift in the way mathematicians can organize their work. When I spoke with the UCLA mathematician and Fields Medalist Terence Tao, he suggested that the immediate value of artificial intelligence might come not from solving our hardest problems directly, but from freeing us from the drudgery — the thousand little things that are conceptually simple but too time-consuming for a person to handle by hand.
Some AI systems, he argued, are already surprisingly good at handling these tasks, allowing mathematicians to devote their attention to strategy rather than bookkeeping. Tools like Lean matter because they give us a way to separate the creativity of generating ideas from the rigor of checking them.
AI proof expert Kevin Buzzard, from Imperial College London, expressed a complementary view. He worries, rightly, about the dangers of relying on large language models that sound authoritative without guaranteeing correctness. But he also argues that formalization offers a way through this. In Lean, if the program accepts all the steps, it is a valid proof. This does not mean that the computer has necessarily done anything “intelligent,” but rather that the formal verification language leaves no room for hidden steps or suggestive-but-incomplete arguments. The challenge, as he sees it, is that most modern mathematics still hasn’t been translated into formal libraries, so the systems don’t yet have the concepts they need.
This latest step forward suggests that the gap is beginning to close. The sphere-packing project is probably the clearest demonstration yet of what is becoming possible.
None of this means that mathematicians are on the brink of extinction. In fact, I suspect the opposite is true. As the space for verifiable mathematics expands, so does the need for people who can ask good questions, create new definitions, and recognize when an argument is genuinely insightful. But we have to adapt. We may find ourselves acting more like scientific instrument builders and less like lone theorists, weaving together human intuition and AI persistence to produce machine-verified certainty.
Mathematics has always progressed through collaboration with its tools. AI does not change this practice; it just takes it to the next level. Mathematical theorems will not become easier to prove, but our capacity to test, verify, and build on them will certainly increase.
Opinion on Live Science gives you insight into the most important questions in science affecting you and the world around you today, written by experts and leading researchers in their field.






