Several people I follow have retweeted this link to an excellent Mathoverflow answer about robustness of mathematical proofs. Here's a quick thread on the topic. The short version of Mike Shulman's answer is that mathematical proofs are more to do with ideas and understanding 1/
than with formal verification. I agree with that analysis, but it leaves various very interesting questions not fully answered. I wish I could answer them myself, but the best I can do for now is offer a few observations. 2/
I think the two most obvious questions are (1) What are "ideas" and "understanding"? and (2) What is it about mathematical proofs that allows them to be "robust" and "fault tolerant"? I'll have more to say about the second. 3/
A useful thing to bear in mind is that not all mathematical proofs are robust. This is nicely illustrated by taking well-known NP-complete problems in graph theory and looking at random instances with various different edge probabilities. 4/
For example, let's take the problem of finding a Hamilton cycle -- that is, a cycle that visits every vertex of a graph exactly once. And let's suppose that we take a random graph with edge probability p. If p is large, then finding a Hamilton cycle is easy: you just ... 5/
keep on visiting a vertex you haven't yet visited, and you won't get stuck till right near the end, at which point you make a few small adjustments and find your cycle. If p is small but a Hamilton cycle is "planted" in the graph, then it will be easy to find for ... 6/
more or less the opposite reason: at each step you will be so short of options that your move is more or less forced. But there is a sweet spot in between where there are lots of tempting long paths, only a tiny proportion of which turn into Hamilton cycles, and ... 7/
few enough edges that if you get near the end and fail to find a cycle, you basically have to junk what you've done and start again. Or if you think you've found a Hamilton cycle and suddenly notice you've missed out a vertex, this near miss will be of no help. 8/
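The greedy heuristic described above for the dense case is easy to sketch in code. The following is my own illustrative implementation, not anything from the thread: it builds an Erdős–Rényi random graph and repeatedly moves to a random unvisited neighbour until it gets stuck.

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi G(n, p) as an adjacency-list dict: each of the
    n*(n-1)/2 possible edges is present independently with probability p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_hamilton_path(adj, start, rng):
    """Greedily extend a path by moving to a random unvisited neighbour,
    returning the path once no unvisited neighbour remains.  In a dense
    graph this rarely gets stuck until close to the end."""
    path = [start]
    visited = {start}
    while True:
        options = [v for v in adj[path[-1]] if v not in visited]
        if not options:
            return path
        nxt = rng.choice(options)
        path.append(nxt)
        visited.add(nxt)
```

With large p this typically produces a near-Hamilton path in one pass; the "few small adjustments" (rotations) needed to close it into a cycle are not implemented here.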
That last possibility is an example of a fragile, non-robust proof that the graph contains a Hamilton cycle: it is wrong, and it can't be locally corrected, as it could be in the dense case. 9/
It's clear that at least one way in which a proof can be robust is if it is "locally correctable", and one very good way for a proof to be locally correctable is if it is modular -- that is, made up of lots of small pieces that can be put together. 10/
Talking of modularity of proofs, I recently had my attention drawn to this article, which I am intending to read soon. 11/
A word of caution though. I once found a nicely modular but wrong proof of Szemerédi's theorem. The mistake was nicely confined to one lemma, but unfortunately that lemma was false, as was the statement I needed it for, as was the statement I needed *that* for. 12/
Even then the modularity was useful though, as it made it very clear what still needed to be done, and a couple of years down the line I eventually got there -- though with a much less simple argument. 13/
So why is so much mathematics robust? The best I can do here is to say that a huge process of selection goes on. The fact that some funny graph happens to contain a Hamilton cycle that can only be found after a huge brute-force search is something we tend to dislike, 14/
saying that there aren't any interesting ideas. So I think that robustness, modularity, ideas, understanding, etc. are all parts of that tiny fraction of statements that we find interesting and are capable of finding proofs of. It's a bit like the anthropic principle 15/
but less dubious: we can actually see that the mathematics we like sits inside a vast "multiverse" of mathematical statements that we don't like. 16/16
An exciting result has just appeared on arXiv, concerning the following simple-seeming problem: if A is a set of n positive integers, then how large a sum-free subset B must it contain? That means that if x, y and z belong to B, then x + y should not equal z. 🧵
A beautiful argument of Erdős shows that you can get n/3. To do so, you observe that if x + y = z, then rx + ry = rz modulo m for any positive integers r and m. So you pick some large prime p and a random r between 1 and p-1, and you note that on average ...
for a third of the elements x of A we have that rx lies between p/3 and 2p/3 mod p. Taking B to be the set of all such x from A gives us a sum-free subset, and its average size is n/3, so it must at least sometimes have that size.
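Erdős's averaging argument is concrete enough to run. Here is my own sketch of it (not from the thread): pick a prime p bigger than every element of A, try every dilation r, and keep the elements whose dilate lands in the "middle third" of the residues mod p, which forces the set to be sum-free.

```python
def next_prime(n):
    """Smallest prime strictly greater than n, by trial division."""
    k = n + 1
    while any(k % d == 0 for d in range(2, int(k ** 0.5) + 1)):
        k += 1
    return k

def erdos_sum_free(A):
    """Erdos's argument: if rx, ry and rz all lie strictly between p/3
    and 2p/3 mod p, then rx + ry cannot equal rz mod p, so the preimage
    of the middle third is sum-free.  On average over r it has size
    about n/3, so trying every r finds one at least that large."""
    p = next_prime(2 * max(A))
    best = set()
    for r in range(1, p):
        B = {x for x in A if p / 3 < (r * x) % p < 2 * p / 3}
        if len(B) > len(best):
            best = B
    return best
```

Exhaustively trying every r just makes the averaging step explicit: since the average size is at least n/3, the maximum must be too.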
Occasionally in mathematics a statement that just has to be true turns out to be false. A paper appeared on arXiv today that disproves a well-known conjecture in probability called the bunkbed conjecture. 🧵
Here's what it says. You start with a connected graph G, which you think of as the bottom "bunk". You then take a copy G' of G, which is the top bunk. For each vertex x in G, let x' be the corresponding vertex of G'.
Obviously the top bunk can't just lie on top of the bottom bunk or it would be impossible to sleep in the latter, so we now choose some of the vertices x of G and join them to the corresponding vertices x' of G'. We call these connecting edges posts.
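The bunkbed construction itself is simple to write down. Here is a small illustrative sketch of my own (the function name and edge-list representation are just one possible encoding, not anything from the paper): given the edges of G and a choice of post vertices, it returns the edge list of the bunkbed graph.

```python
def bunkbed(edges, posts):
    """Build the bunkbed graph from G: vertex v becomes (v, 0) in the
    bottom bunk and (v, 1) in the top bunk; every edge of G is duplicated
    in both bunks, and each chosen post vertex x gets a vertical edge
    joining (x, 0) to (x, 1)."""
    bb = []
    for u, v in edges:
        bb.append(((u, 0), (v, 0)))  # bottom copy of G
        bb.append(((u, 1), (v, 1)))  # top copy G'
    for x in posts:
        bb.append(((x, 0), (x, 1)))  # post joining x to x'
    return bb
```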
I've just tried out a maths problem on ChatGPT4o on which it failed, with a failure mode interestingly similar to its well-known failure on problems such as "I have a goat and a sheep and a large boat. How many crossings do I need to get them both across the river?" 1/5
I asked it, "I have a 7x7 grid with one corner removed. Can I tile it with 3x1 rectangles?" It is not hard to see that the answer is yes. But ChatGPT4o told me no, since there will be an unequal number of black and white squares. 2/5
First I pointed out that that didn't matter, since a 3x1 rectangle does not cover an equal number of black and white squares. It conceded that point but still claimed (with some bogus reasoning) that it was impossible to create the necessary imbalance. 3/5
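To see why the answer is yes, note that the six remaining squares of the corner row can be covered by two horizontal 3x1 tiles, and the rest of the board is a 6x7 rectangle, which vertical tiles cover easily. A small sketch of mine making that explicit (coordinates are (row, column) with the removed corner at (0, 0)):

```python
def explicit_tiling():
    """One concrete tiling of the 7x7 grid minus corner (0, 0) by 3x1
    rectangles: two horizontal tiles finish row 0, then each of the 7
    columns of the remaining 6x7 block gets two vertical tiles."""
    tiles = [
        [(0, 1), (0, 2), (0, 3)],  # horizontal tiles completing row 0
        [(0, 4), (0, 5), (0, 6)],
    ]
    for col in range(7):
        for top in (1, 4):  # vertical tiles spanning rows 1-3 and 4-6
            tiles.append([(top, col), (top + 1, col), (top + 2, col)])
    return tiles
```

Sixteen tiles cover the 48 remaining squares exactly once, so no parity or colouring obstruction can apply.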
Google DeepMind have produced a program that in a certain sense has achieved a silver-medal performance at this year's International Mathematical Olympiad. 🧵
It did this by solving four of the six problems completely, which got it 28 points out of a possible total of 42. I'm not quite sure, but I think that put it ahead of all but around 60 competitors.
However, that statement needs a bit of qualifying.
The main qualification is that the program needed a lot longer than the human competitors -- for some of the problems over 60 hours -- and of course much faster processing speed than the poor old human brain.
I'm very sad to hear that Daniel Dennett has died. I greatly enjoyed his books Consciousness Explained and Elbow Room, and I hope I won't annoy too many people if I express my opinion that what he said in those books was basically right. 1/
For instance, I agree with him that computers could in principle be conscious (but would be very cautious about making such a claim of an actual computer), and also that free will can be reasonably defined in a way that makes it entirely compatible with determinism. 2/
I briefly met him once, after lunch at Trinity where he had been a guest of Simon Blackburn. IIRC he had been giving a talk on why religion exists: he politely listened to my suggestion that confusing correlation with causation might have had a lot to do with it. 3/
Today I start my seventh decade, so here are a few reflections on what it's like to reach that milestone. 🧵
1. I've had an extremely fortunate life so far. Of course, nobody reaches the age of 60 without some bad things happening, but I've had a lot less bad than my fair share and a lot more good.
2. With each passing decade I get that much more aware of the finiteness of my life, but turning 60 is a big step up in that respect from turning 50. I have people basically my age talking about retirement, for example.