
I would like to ask about recent examples, mainly after 2015, where experimentation by computers or other use of computers has led to major mathematical advances.

This is a continuation of a question that I asked 11 years ago.

There are several categories:

A) Mathematical conjectures or large bodies of work arrived at by examining experimental data.

B) Computer-assisted proofs of mathematical theorems.

C) Computer programs that interactively or automatically lead to mathematical conjectures.

D) Computer programs that automatically prove theorems or generate proofs in a specialized field.

E) Computer programs (both general-purpose and special-purpose) for the verification of mathematical proofs.

F) Large databases and other tools.

Of course, more resources (like this Wikipedia page on experimental mathematics) are also useful.

  • I would also like to know recent examples of results of the form "the probability that $X$ is true is at least $\alpha$", where $\alpha$ is very close to $1$ and where $X$ is a random variable that is evaluated experimentally with a computer. (Jun 30, 2021 at 7:39)
  • To qualify as a 'major mathematical advance', do you have specific criteria in mind? For example, Beeson, Narboux and Wiedijk have formalized all of Euclid in HOL Light and Coq (link.springer.com/article/10.1007/s10472-018-9606-x). It allowed them to fix various flaws in Euclid: does this qualify? – Archie, Jun 30, 2021 at 11:04
  • @Archie, in my judgement it does, and maybe I should change "major" to "substantial" or "important". – Gil Kalai, Jun 30, 2021 at 12:25
  • @GilKalai The title of your question is a little misleading. In your list, B and E often involve no "experimental mathematics" as that term is usually understood. Maybe your question should be titled, "The use of computers leading to major advances." (Jun 30, 2021 at 14:45)
  • Dear @TimothyChow, you are right. Thanks to you and to Matt F for the improvement. – Gil Kalai, Jun 30, 2021 at 19:42

13 Answers


There is the recent computer-assisted verification of some key statements by Scholze and Clausen about "condensed mathematics". The task was accomplished by Buzzard, Commelin, and others (see comments below) using Lean, and it led to major media coverage; for instance, here is a related article that appeared in Nature on June 18, 2021.
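For readers who have not seen a proof assistant, here are two toy Lean 4 statements, shown purely to illustrate what formalized mathematics looks like; they are unrelated to the condensed-mathematics verification itself, which is vastly larger.

```lean
-- Toy Lean 4 examples, only to illustrate the style of formalized
-- mathematics; unrelated to the actual verification project.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

example (a b : Nat) (h : a = b) : a + 1 = b + 1 := by
  rw [h]
```

The point of the project was to carry out reasoning of exactly this machine-checked kind, but for deep statements about condensed mathematics rather than toy facts about natural numbers.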

  • Other people involved in this project: Adam Topaz, Riccardo Brasca, Patrick Massot, Scott Morrison, Bhavik Mehta, Filippo Nuccio, Heather Macbeth, Damiano Testa, Mario Carneiro, and many other helpful contributors from the Lean community. – jmc, Jul 1, 2021 at 4:45
  • I concur with this -- in fact, when it comes to actual lines of code written, several of the people in the above list wrote far more than me. Certainly I am a big proponent of formalising modern mathematics in a theorem prover, but most of the hard work here was done by others. (Jul 1, 2021 at 6:56)
  • OK, thank you for providing the names of further contributors. I edited the answer accordingly. (Jul 1, 2021 at 7:33)

Here is an example of type A: Stavros Garoufalidis and Don Zagier have extensive work on refinements of Kashaev's Volume Conjecture (which relates the order of growth of the values of Jones polynomials of hyperbolic knots to the volume of their knot complement). Analyzing not just the main order of growth but the whole asymptotic expansion, they have uncovered a vast amount of hidden structure; the key phrase here is Zagier's Quantum Modularity Conjecture (see equation (36)), but recently even further refinements have been found. This is experimental mathematics par excellence: they compute certain numbers to 1000 decimal places, figure out some asymptotic expansion, subtract it off, find that the remainder still has 500 significant places and admits an asymptotic expansion itself, and so on.
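To give a flavor of this subtract-and-refit workflow, here is a toy version of my own (not the authors' computation, and in double precision rather than 1000 digits): subtract the known leading terms of Stirling's series from $\log\Gamma(n)$ and read off the next coefficient, $1/12$, from the numerical remainder.

```python
# Toy illustration of the "subtract the asymptotic expansion and study
# the remainder" workflow, using Stirling's series for log Gamma(n).
# (The computations described above use ~1000 digits, e.g. via PARI/GP
# or mpmath; plain doubles suffice for this demo.)
import math

def remainder(n):
    """log Gamma(n) minus the leading terms of Stirling's series."""
    main = (n - 0.5) * math.log(n) - n + 0.5 * math.log(2 * math.pi)
    return math.lgamma(n) - main

# The remainder behaves like c1/n + O(1/n^3); multiplying by n at a
# large argument exposes the next coefficient c1 = 1/12 = 0.08333...
c1 = remainder(1000.0) * 1000.0
print(c1)
```

In the real work one iterates this at very high precision: subtract the fitted terms, observe that the new remainder still carries hundreds of significant digits of structure, and fit again.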

This led to some concrete purely number-theoretic discoveries, such as an explicit description of an étale regulator map by Calegari–Garoufalidis–Zagier, and analogues of the Habiro ring for general number fields. Zagier is currently teaching a course on this material.


Here is an interesting one: reinforcement learning used to generate counterexamples to several open conjectures in combinatorics and graph theory.

https://arxiv.org/abs/2104.14516

  • I rewrote the code a bit to make it easier to use. If you can use Python, you can just write a function that gives any graph a score, and it will find graphs with a large score faster than brute-force search. (Jul 4, 2021 at 17:47)

A fascinating recent example in category B is the progress on Kazhdan's property (T) that was made after Ozawa's reformulation of property (T) in terms of semidefinite programming. This has led in particular to the resolution (in the affirmative) of the long-standing open question of whether automorphism groups of free groups of rank $\geq 4$ have property (T), by Kaluba-Nowak-Ozawa (rank $5$), Kaluba-Kielak-Nowak (rank $>5$) and Nitsche (rank $4$). This gives a rigorous explanation for the remarkable efficiency of the product replacement algorithm for the generation of random elements in a finite group.

This story happened almost live on MO; see here for many more details.


What about Giles Gardam's construction of non-trivial units in the mod-2 group algebra of a torsion-free group (https://arxiv.org/abs/2102.11818), which solved an 80-year-old conjecture, and Alan Murray's extension of Gardam's work to the mod-$p$ group algebra of the same group for odd primes (https://arxiv.org/abs/2106.02147)? I'm not sure which category to put it in. There is an interview with Gardam in which he says that the computing he did only required his laptop, but that it was comforting to know that he could have done more with a bigger computer if necessary.


Duplicating my comment: in 2015 Beeson, Narboux and Wiedijk formalized all of Euclid in HOL Light and Coq (https://doi.org/10.1007/s10472-018-9606-x). It allowed them to fix various flaws in Euclid.

  • +1 - Just spent 30 minutes skimming the paper. The title of Section 16: "How wrong was Euclid?". Full disclosure - in that section they only sketch the various problems they found. As they repeatedly note, mostly this paper is about the work needed to carry out the proof checking (of their formalised, and corrected!, proofs). They promise a future paper which will expand upon the fixes needed... – Sam Nead, Jan 4 at 22:17

For experimental mathematics as that term is usually understood, I would commend to your attention the paper by Roger Behrend, Ilse Fischer and Matjaž Konvalinka, "Diagonally and antidiagonally symmetric alternating sign matrices of odd order," Advances in Mathematics 315 (2017), 324–365, which won the 2019 AMS David P. Robbins Prize. In this paper, the authors proved a formula for the number of odd-order diagonally and antidiagonally symmetric alternating sign matrices, thus settling the last remaining of David Robbins’s conjectures on alternating sign matrices, which had stood for over thirty years.

One of the criteria for the Robbins Prize is that the work should include a significant experimental component. In this case, the authors' computer experimentation helped them discover the form of a crucial partition function.
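For context, the enumeration that started this line of work is itself a famous product formula that is easy to experiment with. Here is a quick sketch; note that this is the classic count of all $n \times n$ alternating sign matrices, not the diagonally and antidiagonally symmetric class settled in the prize-winning paper.

```python
# The classic ASM counting formula (conjectured by Mills, Robbins and
# Rumsey; proved by Zeilberger and by Kuperberg), for ALL n x n
# alternating sign matrices -- not the symmetry class treated above:
#     A(n) = prod_{k=0}^{n-1} (3k+1)! / (n+k)!
from math import factorial

def asm_count(n):
    num = den = 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den  # the ratio is always an integer

print([asm_count(n) for n in range(1, 6)])  # [1, 2, 7, 42, 429]
```

Robbins's conjectures arose from exactly this kind of experimentation: computing the first few counts, recognizing the product pattern, and then conjecturing formulas for the symmetry classes.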


Adding to Archie's answer, Marijn Heule and collaborators have resolved two major questions in Ramsey theory on the integers using SAT solvers. The 2016 result resolving the Boolean Pythagorean triples problem received a lot of media attention for being the largest proof to date. The 2018 result determining the fifth Schur number was much larger. For links, including the expository article "The Science of Brute Force," see his Carnegie Mellon webpage.
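The encoding behind the Pythagorean triples result is simple to reproduce; here is a minimal sketch (Heule, Kullmann and Marek's actual work adds symmetry breaking, cube-and-conquer splitting, and proof logging). Each integer gets one Boolean variable for its color, and each triple $a^2 + b^2 = c^2$ contributes two clauses forbidding a monochromatic coloring.

```python
# Minimal sketch of the CNF encoding of the Boolean Pythagorean triples
# problem: variable i is true iff integer i gets the first color; no
# triple a^2 + b^2 = c^2 may be monochromatic. (Heule, Kullmann and
# Marek showed the instance with n = 7825 is unsatisfiable.)
def pythagorean_cnf(n):
    squares = {i * i: i for i in range(1, n + 1)}
    clauses = []
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            c = squares.get(a * a + b * b)
            if c is not None:
                clauses.append([a, b, c])      # not all first color
                clauses.append([-a, -b, -c])   # not all second color
    return clauses

def to_dimacs(n, clauses):
    """Serialize in the DIMACS format accepted by standard SAT solvers."""
    lines = ["p cnf {} {}".format(n, len(clauses))]
    lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
    return "\n".join(lines)

print(to_dimacs(13, pythagorean_cnf(13)))
```

Feeding the `n = 7825` instance produced this way to a SAT solver is exactly the shape of the computation behind the result, although the raw encoding alone would be far too slow without their engineering.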


An example of type B (I think) is Marijn Heule's program to reduce the size of graphs satisfying a given property. As an application, it allowed him to find 5-chromatic unit-distance graphs smaller than Aubrey de Grey's original one; see https://arxiv.org/abs/1805.12181 and https://arxiv.org/abs/1907.00929.
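The basic idea, as I understand it, is a deletion loop: keep removing vertices as long as the target property survives. The toy sketch below is my own illustration, not Heule's actual algorithm (which works on SAT encodings of colorability); starting from $K_6$ and insisting that the chromatic number stay at least $4$, the loop trims down to $K_4$.

```python
# Toy sketch of the trimming idea: repeatedly delete a vertex while the
# graph keeps a desired property; here the property is "chromatic
# number >= 4", checked by brute force over all (k-1)-colorings.
from itertools import product

def chromatic_at_least(edges, vertices, k):
    """True if the graph admits no proper coloring with k - 1 colors."""
    vs = sorted(vertices)
    for coloring in product(range(k - 1), repeat=len(vs)):
        col = dict(zip(vs, coloring))
        if all(col[u] != col[v] for u, v in edges):
            return False  # found a proper (k-1)-coloring
    return True

def trim(edges, keeps_property):
    """Greedily delete vertices while keeps_property(edges, vertices) holds."""
    vertices = {v for e in edges for v in e}
    shrinking = True
    while shrinking:
        shrinking = False
        for v in sorted(vertices):
            kept = [e for e in edges if v not in e]
            if keeps_property(kept, vertices - {v}):
                edges, vertices = kept, vertices - {v}
                shrinking = True
                break
    return edges, vertices

# Start from the complete graph K6; trimming while the chromatic number
# stays >= 4 ends at K4 (4 vertices, 6 edges).
k6 = [(i, j) for i in range(6) for j in range(i + 1, 6)]
final_edges, final_vertices = trim(k6, lambda e, v: chromatic_at_least(e, v, 4))
print(len(final_vertices), len(final_edges))  # 4 6
```

The real graphs have hundreds of vertices, so the brute-force colorability check above must be replaced by a SAT call, which is precisely where the solver engineering comes in.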

  • I think de Grey's proof was also originally computer-assisted. – Mark S, Jul 1, 2021 at 2:13
  • Aubrey de Grey's paper won the 2020 MAA Robbins Prize. (Jul 2, 2021 at 13:15)

An example of type C: DeepMind launched an endeavor to use machine learning (and deep learning in particular) to find conjectures based on data. Two recent outcomes are progress toward the Dyer-Lusztig conjecture (Charles Blundell, Lars Buesing, Alex Davies, Petar Veličković, Geordie Williamson) and certain new invariants in knot theory (Alex Davies, András Juhász, Marc Lackenby, Nenad Tomasev).

  • It's interesting that these are both cases where there was reason to believe that A might depend only on B, where A and B are two invariants, but how A depends on B exactly was mysterious. These cases seem uncommon, but I guess it would be good for mathematicians to think about whether there are any examples in their own areas of research... – Will Sawin, Dec 2, 2021 at 22:10
  • The second article just appeared in Nature, here: nature.com/articles/s41586-021-04086-x (open access). (Dec 3, 2021 at 1:30)
  • @WillSawin Perhaps the Lovász conjecture has this form. But the Lovász conjecture may be false. (Dec 3, 2021 at 14:21)
  • @TimothyChow You're suggesting one could look for an algorithm to find a Hamiltonian path? – Will Sawin, Dec 3, 2021 at 14:25
  • @WillSawin Right. This is maybe more plausible for the weaker version of the conjecture, that a Cayley graph is Hamiltonian. Can one algorithmically exploit the structure of the group to find a Hamiltonian path? One could of course start with special classes of groups. (Dec 3, 2021 at 14:27)

The paper "Constructions in combinatorics via neural networks" by Adam Zsolt Wagner uses a reinforcement learning algorithm, the deep cross-entropy method, to find explicit constructions and counterexamples to several open conjectures in extremal combinatorics and graph theory! Let me also mention the paper "Refuting conjectures in extremal combinatorics via linear programming," also by Adam Zsolt Wagner. (I thank Geordie Williamson for telling me about Adam's work.)
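The cross-entropy method at the heart of this approach can be sketched in a few lines. This is my own minimal version with the learned neural network replaced by a vector of independent Bernoulli probabilities, one per edge slot of a graph; Wagner's code couples the same sample-select-refit loop to a neural policy. The toy score here (edge count) is hypothetical and stands in for whatever graph invariant one wants to push to an extreme.

```python
# Minimal sketch of the cross-entropy method over bitstrings: sample
# candidates, keep the top-scoring fraction ("elites"), and refit the
# sampling distribution to them. In Wagner's work the bitstring encodes
# a graph's edges and the distribution is a neural network.
import random

def cross_entropy_search(score, n_bits, pop=200, elite_frac=0.1, iters=100):
    probs = [0.5] * n_bits              # the sampling distribution
    best, best_score = None, float("-inf")
    for _ in range(iters):
        samples = [[1 if random.random() < p else 0 for p in probs]
                   for _ in range(pop)]
        ranked = sorted(samples, key=score, reverse=True)
        elites = ranked[: max(1, int(pop * elite_frac))]
        if score(elites[0]) > best_score:
            best, best_score = elites[0], score(elites[0])
        for j in range(n_bits):         # refit distribution to the elites
            freq = sum(e[j] for e in elites) / len(elites)
            probs[j] = 0.9 * freq + 0.1 * probs[j]
    return best, best_score

# Toy score: number of edges among 6 labelled vertices; the optimum is
# the complete graph K6, i.e. all 15 edge slots set to 1.
random.seed(0)
n_edge_slots = 6 * 5 // 2
best, best_score = cross_entropy_search(sum, n_edge_slots)
print(best_score)
```

To hunt for counterexamples one scores each graph by how badly it violates the conjectured inequality, so that any positive score is a refutation.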


At ICM 2022, Kevin Buzzard gave a talk, "The rise of formalism in mathematics," about computer verification of mathematical proofs (item E). It is about writing large chunks of mathematics in Lean (or other systems).

(As far as I could see, the talk did not address the question of (automated) "compilers" to transform humanly written proofs into formal proofs.)

Buzzard's public lecture followed a suggestion here at MO in response to the question "Suggestions for special lectures at next ICM".

Video: https://youtu.be/SEID4XYFN7o


Recently, Merino, Mütze, and Namrata proved that all Kneser graphs besides the Petersen graph are Hamiltonian (link). This is a very nice special case of Lovász's conjecture about vertex-transitive graphs.

In Subsection 4.3, they rely upon an intricate collection of regular expressions, which was found experimentally with computer assistance. To quote the last paragraph of that subsection:

The reader may rightfully wonder how on earth one would come up with the set of nine regular expressions stated in (53). In fact, they were found following a computer-guided experimental approach of trial and error. We started with a much smaller and simpler set of rules that would guarantee lexicographic improvement along the connectors. By writing a computer program that tests small cases, we figured out that those initial rules still allowed some conflicts, i.e., not all pairs of connectors were disjoint. This led to modifying the existing rules and adding new rules that would prevent those particular conflicts, sometimes creating new unforeseen conflicts, as reported by our program. We used this approach of successive refinement until eventually a conflict-free set of rules was confirmed by computer for small cases, which we then verified theoretically for all cases.
