Several people I follow have retweeted this link to an excellent Mathoverflow answer about robustness of mathematical proofs. Here's a quick thread on the topic. The short version of Mike Shulman's answer is that mathematical proofs are more to do with ideas and understanding 1/
than with formal verification. I agree with that analysis, but it leaves various very interesting questions not fully answered. I wish I could answer them myself, but the best I can do for now is offer a few observations. 2/
I think the two most obvious questions are (1) What are "ideas" and "understanding"? and (2) What is it about mathematical proofs that allows them to be "robust" and "fault tolerant"? I'll have more to say about the second. 3/
A useful thing to bear in mind is that not all mathematical proofs are robust. This is nicely illustrated by taking well-known NP-complete problems in graph theory and looking at random instances with various different edge probabilities. 4/
For example, let's take the problem of finding a Hamilton cycle -- that is, a cycle that visits every vertex of a graph exactly once. And let's suppose that we take a random graph with edge probability p. If p is large, then finding a Hamilton cycle is easy: you just ... 5/
keep on visiting a vertex you haven't yet visited, and you won't get stuck till right near the end, at which point you make a few small adjustments and find your cycle. If p is small but a Hamilton cycle is "planted" in the graph, then it will be easy to find for ... 6/
more or less the opposite reason: at each step you will be so short of options that your move is more or less forced. But there is a sweet spot in between where there are lots of tempting long paths, only a tiny proportion of which turn into Hamilton cycles, and ... 7/
few enough edges that if you get near the end and fail to find a cycle, you basically have to junk what you've done and start again. Or if you think you've found a Hamilton cycle and suddenly notice you've missed out a vertex, this near miss will be of no help. 8/
That last possibility is an example of a fragile, non-robust proof that the graph contains a Hamilton cycle: it is wrong, and it can't be locally corrected, as it could be in the dense case. 9/
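The greedy strategy for the dense case can be sketched in a few lines of Python. This is a minimal illustration, not a serious heuristic: the function names and the restart bound are my own, and the adjacency representation (vertex → set of neighbours) is just an assumption for the sketch.

```python
import random

def greedy_hamilton(adj, tries=500, rng=None):
    """Try to find a Hamilton cycle by greedy path extension with restarts.

    adj: dict mapping each vertex to its set of neighbours.
    Works well when the graph is dense; in the sparse "sweet spot"
    it will usually get stuck, junk the path, and restart.
    """
    rng = rng or random.Random(0)
    vertices = list(adj)
    n = len(vertices)
    for _ in range(tries):
        start = rng.choice(vertices)
        path, seen = [start], {start}
        while len(path) < n:
            # keep visiting a vertex we haven't yet visited
            options = [v for v in adj[path[-1]] if v not in seen]
            if not options:
                break  # stuck: discard this attempt and start again
            v = rng.choice(options)
            path.append(v)
            seen.add(v)
        if len(path) == n and start in adj[path[-1]]:
            return path  # closing edge exists: Hamilton cycle found
    return None

def is_hamilton_cycle(adj, path):
    """Check that path visits every vertex once and closes into a cycle."""
    n = len(adj)
    return (path is not None and len(path) == n and len(set(path)) == n
            and all(path[i + 1] in adj[path[i]] for i in range(n - 1))
            and path[0] in adj[path[-1]])
```

On a complete graph the greedy extension can never get stuck, so it succeeds on the first try; on a sparse random graph without a planted cycle, `greedy_hamilton` will typically exhaust its restarts and return `None`.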
It's clear that at least one way in which a proof can be robust is if it is "locally correctable", and one very good way for a proof to be locally correctable is if it is modular -- that is, made up of lots of small pieces that can be put together. 10/
Talking of modularity of proofs, I recently had my attention drawn to this article, which I am intending to read soon. 11/
A word of caution though. I once found a nicely modular but wrong proof of Szemerédi's theorem. The mistake was nicely confined to one lemma, but unfortunately that lemma was false, as was the statement I needed it for, as was the statement I needed *that* for. 12/
Even then the modularity was useful though, as it made it very clear what still needed to be done, and a couple of years down the line I eventually got there -- though with a much less simple argument. 13/
So why is so much mathematics robust? The best I can do here is to say that a huge process of selection goes on. The fact that some funny graph happens to contain a Hamilton cycle that can only be found after a huge brute-force search is something we tend to dislike, 14/
saying that there aren't any interesting ideas. So I think that robustness, modularity, ideas, understanding, etc. are all parts of that tiny fraction of statements that we find interesting and are capable of finding proofs of. It's a bit like the anthropic principle 15/
but less dubious: we can actually see that the mathematics we like sits inside a vast "multiverse" of mathematical statements that we don't like. 16/16
I'm very sad to hear that Daniel Dennett has died. I greatly enjoyed his books Consciousness Explained and Elbow Room, and I hope I won't annoy too many people if I express my opinion that what he said in those books was basically right. 1/
For instance, I agree with him that computers could in principle be conscious (but would be very cautious about making such a claim of an actual computer), and also that free will can be reasonably defined in a way that makes it entirely compatible with determinism. 2/
I briefly met him once, after lunch at Trinity where he had been a guest of Simon Blackburn. IIRC he had been giving a talk on why religion exists: he politely listened to my suggestion that confusing correlation with causation might have had a lot to do with it. 3/
Today I start my seventh decade, so here are a few reflections on what it's like to reach that milestone. 🧵
1. I've had an extremely fortunate life so far. Of course, nobody reaches the age of 60 without some bad things happening, but I've had a lot less bad than my fair share and a lot more good.
2. With each passing decade I get that much more aware of the finiteness of my life, but turning 60 is a big step up in that respect from turning 50. I have people basically my age talking about retirement, for example.
My son has just started calculus, and I asked him what the relationship was between the gradients of the tangent and the normal to a curve at a given point. His first reply was, "They are perpendicular." I've noticed many times that something one gains with experience ... 1/7
in mathematics is an acute sensitivity to types. An experienced mathematician could not give that answer, for the simple reason that gradients are real numbers and two real numbers cannot be perpendicular to each other. 2/7
It didn't take long for him to correct himself and give the answer I had been looking for, but the point remains: get someone into the habit of being aware of the type of everything they are talking about, and their mathematical thinking automatically becomes much clearer. 3/7
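The type-correct version of the answer is that the gradients are negative reciprocals of each other: m_tangent × m_normal = -1 (when neither line is vertical). A minimal numerical sketch, with helper names of my own choosing:

```python
def tangent_slope(f, x, h=1e-6):
    """Gradient of the tangent: central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def normal_slope(f, x):
    """Gradient of the normal: the negative reciprocal of the tangent's."""
    m = tangent_slope(f, x)
    if m == 0:
        raise ValueError("normal is vertical: its gradient is undefined")
    return -1.0 / m

# For f(x) = x^2 at x = 3: tangent slope ~ 6, normal slope ~ -1/6.
m_t = tangent_slope(lambda x: x * x, 3.0)
m_n = normal_slope(lambda x: x * x, 3.0)
# The *lines* are perpendicular; the *numbers* merely satisfy m_t * m_n = -1.
```

The final comment is the type point in miniature: perpendicularity is a relation between lines, and the corresponding relation between the two real numbers is that their product is -1.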
I have often seen statistics like this, and am very much in favour of curbing the high-emitting activities of the rich (and while there are plenty of people richer than I am, I am not excluding myself from the people whose emissions must be curbed).
There is an important calculation that economists must have done somewhere, which I have not managed to find, concerning what effect a big redistribution of wealth would have on emissions. 2/
On the face of it, the argument looks simple: the rich are responsible for the lion's share of emissions, so if we redistributed in such a way as to bring the rich down to a lower level of wealth, we would have made big progress, by dealing with that lion's share. 3/
It's an amazing time to be alive for a combinatorialist at the moment, with a number of long-standing problems, several of them personal favourites of mine, being resolved. Today I woke up to the news of yet another breakthrough, due to Sam Mattheus and Jacques Verstraete. 🧵
A month or two ago I tweeted about a stunning new result that obtained an exponential improvement for the upper bound for the Ramsey number R(k,k), a problem I had thought about a lot. When I felt stuck on that, I would sometimes turn my attention to a related problem
that felt as though it ought to be easier: estimating the Ramsey number R(4,k). This is the smallest n such that every graph on n vertices contains either a K_4 (that is, four vertices all joined to each other) or an independent set of size k (that is, k vertices not joined at all).
I was at a sensational combinatorics seminar in Cambridge yesterday, reminiscent of the time I had been tipped off that Andrew Wiles's seminar at the Newton Institute on Wednesday 23rd June 1993 might be worth going to. 🧵
The speaker was my colleague Julian Sahasrabudhe, who announced that he, Marcelo Campos, Simon Griffiths and Rob Morris had obtained an exponential improvement to the upper bound for Ramsey's theorem.
Ramsey's theorem says that for any number k there is a number R(k) with the property that if you have R(k) people in a room and any two of them are either friends or enemies, then you can find k people who are either all friends of each other or all enemies of each other.
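The smallest non-trivial case, k = 3, is small enough to check exhaustively: R(3) = 6, meaning six people always contain three mutual friends or three mutual enemies, while five people need not. A brute-force sketch (the helper names are my own):

```python
from itertools import combinations, product

def has_mono_triangle(n, colour):
    """colour maps each pair (u, v) with u < v to 0 (friends) or 1 (enemies).
    Returns True if some triangle is monochromatic."""
    return any(
        colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def ramsey_holds(n):
    """True iff *every* friends/enemies colouring of n people contains a
    monochromatic triangle, i.e. iff n >= R(3)."""
    pairs = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(pairs, bits)))
        for bits in product((0, 1), repeat=len(pairs))
    )
```

For n = 5 the check fails (the pentagon/pentagram colouring of K_5 has no monochromatic triangle), and for n = 6 it succeeds after examining all 2^15 colourings. Of course, this kind of exhaustive search explodes immediately, which is exactly why estimating R(k) for general k is so hard.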