Truth and provability, a thread. Let us compare Tarski's theorem on the nondefinability of truth with Gödel's incompleteness theorem. Smullyan advanced the view that much of the fascination with Gödel's theorem would be better directed toward Tarski's theorem.
Both theorems begin with the profound arithmetization idea, Gödel's realization that arithmetic interprets essentially any finite combinatorial process, including arithmetic syntax itself. Every arithmetic assertion φ is thus represented in arithmetic by its Gödel code 'φ', a number.
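To make the arithmetization step concrete, here is a minimal Python sketch of one naive coding scheme (illustrative only, not the particular numbering Gödel used; the symbol table and function names are my own choices): each symbol gets a number, and a formula is coded by prime-power exponents, so the formula is recoverable from the factorization of its code.

```python
# A naive Gödel numbering (illustrative only; not Gödel's own scheme).
# Each symbol gets a positive number, and the formula s1 s2 ... sk is coded
# as 2^c(s1) * 3^c(s2) * 5^c(s3) * ..., so the formula can be recovered
# from the prime factorization of the code.

def nth_prime(n: int) -> int:
    """Return the nth prime (1-indexed), by trial division; fine for tiny n."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

SYMBOLS = {ch: i + 1 for i, ch in enumerate("0S+*=()<¬∧∨→∀∃xyz")}

def godel_code(formula: str) -> int:
    """Code a formula string as a single natural number."""
    code = 1
    for position, symbol in enumerate(formula):
        code *= nth_prime(position + 1) ** SYMBOLS[symbol]
    return code

print(godel_code("∀x(x=x)"))   # the assertion ∀x(x=x) as one (enormous) number
```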
Gödel's theorem (the first incompleteness theorem) is the claim that no computably axiomatizable true arithmetic theory can prove all arithmetic truths. Every such theory will admit true but unprovable assertions.
Tarski's theorem, in contrast, asserts that arithmetic truth is not arithmetically expressible. That is, there is no arithmetic predicate T(x) holding exactly of the codes of true assertions. There is no such expression T for which φ⇔T('φ') holds of every arithmetic assertion φ.
Tarski's theorem immediately implies Gödel's incompleteness theorem, since provability in a computably axiomatizable theory is arithmetically expressible. By Tarski's theorem, provability cannot align with truth, for if it did, truth would be expressible. So there must be true unprovable statements.
Tarski's theorem admits a proof similar to the fixed-point proof of Gödel's theorem: if truth were expressible by T(x), then we could find a fixed point of nontruth, λ⇔¬T('λ'), the Liar sentence λ, which asserts its own untruth, directly contradicting Tarski's requirement.
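Spelled out, the contradiction is a two-line calculation (a sketch, assuming T(x) expresses truth and λ is the fixed point supplied by the fixed-point lemma):

```latex
\begin{align*}
\lambda &\iff T(\ulcorner\lambda\urcorner)
  && \text{(Tarski requirement: $T$ holds exactly of the true assertions)}\\
\lambda &\iff \neg T(\ulcorner\lambda\urcorner)
  && \text{(fixed point: $\lambda$ asserts its own untruth)}
\end{align*}
```

Together these give T('λ') ⇔ ¬T('λ'), a contradiction.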
But Tarski's theorem also admits a direct proof via Russell. Namely, if truth were expressible, then we could express R(x), asserting that if x is the code of a formula φ(x), then φ(x̲) is not true, where x̲=1+⋯+1 is the numeral for x. If r='R(x)', then R(r) iff ¬R(r), a contradiction.
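Unpacking that last step (a sketch; here r = 'R(x)' is the Gödel code of the formula R itself):

```latex
\begin{align*}
R(r) &\iff R(\underline{r})\ \text{is not true}
  && \text{(definition of $R$, since $r$ codes the formula $R(x)$)}\\
     &\iff \neg R(r)
  && \text{(a sentence is true exactly when it holds).}
\end{align*}
```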
This is a direct proof of Tarski's theorem via Russell, bypassing Gödel. Reflection reveals that it is ultimately similar to the Gödel fixed-point proof, but I would argue that this argument is simpler than the fixed-point lemma, which is itself derived from Russell's idea.
In this way, Tarski's theorem is easier to prove than Gödel's, in that it requires less work in arithmetization and is also closer to Russell's elementary argument. The main point: we can prove Tarski's theorem straight from Russell, plus Gödel codes for assertions.
Furthermore, Tarski's theorem is stronger than Gödel's: Gödel's theorem applies only to computably axiomatizable theories, whereas Tarski shows that no arithmetically definable theory at all captures truth, a far stronger conclusion.
Conclusion: Gödel showed that truth does not align with provability in any given computably axiomatizable theory. Tarski showed that truth is not arithmetically expressible at all, a stronger result, which is also easier to prove.
I wish I had made this point more strongly in my book. Chapter 7 is all about Gödel's theorem and Tarski's theorem and the connection with the Hilbert program, but I wasn't as forceful about this perspective when I wrote the book as I now feel is warranted. #PhilMaths
More succinctly: if truth were arithmetically expressible, then we could express the predicate R(x) asserting that x codes an arithmetic formula that does not hold of its own Gödel code. If r is the Gödel code of R(x), then R(r) iff ¬R(r), a contradiction. Just like Russell.
I once saw an incredible lecture in Berkeley by a historian of science, one of the best talks I have ever seen. He started his talk sitting on the desk amongst a huge pile of physics textbooks from history, each in its day "the best textbook of its time."
He began by showing us the text written by a student of Isaac Newton, which explained Newtonian forces and the motion of planets. Nevertheless, it had various mistakes and misunderstandings. He set it aside and moved to the next book, also excellent, but also flawed in some way.
And then the next. The speaker proceeded systematically through the historical procession of physics texts, highlighting how each had improved upon its predecessors, but also explaining in specific detail ways in which each of them was flawed.
Foreshadowing — the algebra of orders. Here is an addition table for some simple finite orders. Did you know that you can add and multiply any two orders? A+B means a copy of A with a copy of B above. Stay tuned for multiplication...
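Here is a minimal Python sketch of that addition operation on finite orders, under the convention just stated (A+B is a copy of A with a copy of B placed entirely above it); the representation by element sets and strict-comparison pairs is simply my own choice for illustration.

```python
# Finite orders represented as (elements, strict), where strict is the set
# of pairs (x, y) with x strictly below y.  The sum A + B places every
# element of the B-copy strictly above every element of the A-copy.

def order_sum(A, B):
    elems_A, strict_A = A
    elems_B, strict_B = B
    ea = {(x, 'A') for x in elems_A}            # tagged copies keep A and B disjoint
    eb = {(y, 'B') for y in elems_B}
    strict = {((x, 'A'), (y, 'A')) for (x, y) in strict_A}
    strict |= {((x, 'B'), (y, 'B')) for (x, y) in strict_B}
    strict |= {(a, b) for a in ea for b in eb}  # all of the A-copy below all of the B-copy
    return ea | eb, strict

# Example: the 2-element linear order plus the 3-element linear order
# is the 5-element linear order.
two = ({0, 1}, {(0, 1)})
three = ({0, 1, 2}, {(0, 1), (0, 2), (1, 2)})
elems, strict = order_sum(two, three)
print(len(elems), len(strict))   # 5 elements, 10 strict comparisons
```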
Order relations are often fruitfully conceived as being stratified into a linear hierarchy of levels. The simple order shown here, for example, is naturally stratified by three levels.
What is a "level"? A stratification of an order into levels amounts to a linear preorder relation on the same domain. namely, a linear grading of a partial order ≼ is a preorder relation ≤ respecting the strict relation, that is, for which a≺b implies a<b.
There are often far more linear gradings of a partial order than one might expect. The simple order shown in the post above, for example, admits thirteen distinct gradings!
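For those who want to experiment, here is a small Python sketch that counts the linear gradings of a given finite order; since the pictured order is not reproduced here, the example at the end is just a stand-in three-element order.

```python
from itertools import product

# Count the linear gradings of a finite partial order.  A grading is taken
# here as a level function: each element gets a level, using levels
# 0, 1, ..., k-1 with no gaps, such that a ≺ b forces level(a) < level(b).
# Such level functions correspond exactly to the linear preorders above.

def count_gradings(elements, strict):
    elements = list(elements)
    n = len(elements)
    count = 0
    for levels in product(range(n), repeat=n):
        used = set(levels)
        if used != set(range(len(used))):        # count each ordered partition once
            continue
        f = dict(zip(elements, levels))
        if all(f[a] < f[b] for (a, b) in strict):
            count += 1
    return count

# Stand-in example: the "V" order with a below both b and c admits 3 gradings.
print(count_gradings(['a', 'b', 'c'], {('a', 'b'), ('a', 'c')}))   # -> 3
```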
Let me share some illustrations I've recently created to discuss the topic:
What is a number?
Two sets are equinumerous—they have the same cardinal size—when they can be placed into a one-to-one correspondence, like the shepherd counting his sheep off on his fingers.
Hume's principle asserts that the number of F's is the same as the number of G's just in case these classes are equinumerous.
{ x | Fx } ~ { x | Gx }
The equinumerosity relation thus partitions the collection of all sets into equinumerosity classes of same-size sets.
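As a toy illustration for finite sets, here is a short Python sketch that exhibits a one-to-one correspondence when one exists and then groups a small collection of sets into equinumerosity classes; the sample sets are my own made-up example.

```python
# A toy illustration of equinumerosity for finite sets: exhibit an explicit
# one-to-one correspondence when one exists, then group a collection of
# sets into equinumerosity classes of same-size sets.

def correspondence(A, B):
    """Return a one-to-one correspondence A -> B as a list of pairs, or None."""
    A, B = list(A), list(B)
    if len(A) != len(B):
        return None                      # no pairing is both exhaustive and injective
    return list(zip(A, B))               # pair them off, like sheep against fingers

sets = [{'a', 'b'}, {1, 2, 3}, {'x', 'y'}, set(), {0}]
classes = []                             # each class: a list of pairwise equinumerous sets
for s in sets:
    for cls in classes:
        if correspondence(s, cls[0]) is not None:
            cls.append(s)
            break
    else:
        classes.append([s])

for cls in classes:
    print(len(cls[0]), cls)              # each class collects the same-size sets
```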
In his attempts to reduce all mathematics to logic—the logicist program—Frege defined the cardinal numbers simply to be these equivalence classes. For Frege, the number 2 is the class of all two-element sets; the number 3 is the class of all three-element sets, and so on.