Jonathan Gorard
Applied mathematician, computational physicist @Princeton. Previously @Cambridge_Uni. Making the universe computable.

Mar 14, 12 tweets

I think one of the conclusions we should draw from the tremendous success of LLMs is how much of human knowledge and society exists at very low levels of Kolmogorov complexity.

We are entering an era where the minimal representation of a human cultural artifact... (1/12)

...will be, generically, an LLM prompt. And those prompts will be, generically, orders of magnitude more compact than the artifacts themselves. The great success of coding agents, for instance, indicates that the source code of most software artifacts is orders of... (2/12)

...magnitude more bloated than the truly minimal algorithmic representation required to specify that software artifact unambiguously. Likewise for much of human writing, research, and communication. By being such efficient decompressors of algorithmic information, LLMs have... (3/12)

...betrayed the horrifying extent of our own verbosity. Part of that verbosity doubtless arises from the limitations of our formal representation languages (such as programming languages). But part of it also seems inherent, likely as a means of human error-correction. (4/12)

When the intended decompressor is very lossy (like a human mind), overspecifying the representation with lots of synonyms and syntactic sugar seems prudent. When the intended decompressor is closer to perfectly lossless (as LLMs are rapidly becoming), it makes less sense. (5/12)
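[A toy illustration of the verbosity claim, not part of the thread: Kolmogorov complexity is uncomputable, but any general-purpose compressor gives an upper bound on it, so comparing compressed sizes makes "orders of magnitude more bloated" concrete. The example strings below are invented, and zlib is just a convenient stand-in for a much better compressor like an LLM.]

```python
import zlib

def compressed_size(text: str) -> int:
    """Upper bound on Kolmogorov complexity: zlib output length at max level."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# A deliberately verbose "artifact" (heavy repetition, like bloated source code)
# versus a compact, prompt-like specification of the same behavior.
verbose = ("The function takes a list of numbers and returns a new list "
           "containing each of those numbers multiplied by two. " * 20)
compact = "double each element of a list"

# The verbose text compresses far below its raw length: the redundancy
# (human error-correction, in the thread's terms) carries little information.
print(len(verbose), compressed_size(verbose), compressed_size(compact))
```

Of course, a real LLM-as-decompressor achieves far more extreme ratios than zlib, because it can expand a short prompt into structure it has learned, not merely undo literal repetition.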

Mathematics and physics represent interesting test cases. The process of axiomatization in mathematics is a form of algorithmic compression: all the true theorems are always "contained" in the representation of the axioms and the rules of inference, but the process of... (6/12)

...decompressing this representation can be arbitrarily difficult. Yet the details of how the decompression (theorem-proving) and compression (reverse mathematics) processes happen are, in some sense, the true objects of mathematical interest. Likewise with physics. (7/12)
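[A sketch of mine, not from the thread, to make the axioms-as-compression picture concrete in Lean 4: the equality below is already "contained" in the axioms and inference rules behind Nat, but a proof term must still be constructed. Here the decompression step is trivial; for deep theorems, producing such a term can be arbitrarily hard.]

```lean
-- Commutativity of addition is "compressed into" the Peano-style axioms
-- behind Nat; exhibiting the proof term is the decompression work.
theorem two_add_three : 2 + 3 = 3 + 2 :=
  Nat.add_comm 2 3
```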

One might, if one were sufficiently naive, claim that physics is about finding minimal algorithmic compressions of the physical universe. Yet again, the details of (de)compression are ultimately what matter. Merely finding a minimal representation of the universe... (8/12)

...wouldn't "solve physics", any more than discovering the ZFC axioms "solved mathematics". [If one believes, as I do, that the universe can ultimately be modeled in computational terms, then in a sense this representation already exists: it's a universal Turing machine.] (9/12)
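[A minimal sketch of the bracketed point, my addition: a computationally universal rule can have a tiny description. Rule 110, an elementary cellular automaton proved Turing-universal by Matthew Cook, fits in a few lines; all the difficulty lives in the "decompression," i.e. running and interpreting it.]

```python
def rule110_step(cells: list[int]) -> list[int]:
    """One step of elementary cellular automaton rule 110 (Turing-universal),
    on a periodic row of 0/1 cells. The rule number 110 = 0b01101110 encodes
    the new cell value for each 3-cell neighborhood, read as a 3-bit index."""
    n = len(cells)
    return [
        (110 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell: the characteristic leftward-growing rule 110 pattern.
row = [0, 0, 0, 1, 0, 0]
print(rule110_step(row))
```

The entire "representation" is one integer, 110; everything interesting about its behavior emerges only through iteration, which is exactly the thread's point about minimal representations not being the same as understanding.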

LLMs are remarkably effective decompressors of algorithmic information, and their success in theorem-proving and software development is a testament to that. Their capabilities in compression currently seem less clear. Yet discovering minimal representations,... (10/12)

...be they witty aphorisms or bon mots (at which present LLMs are uniformly awful), or the compressed axiomatic representations that characterize mathematical beauty (at which present LLMs are largely untested), constitutes one of the hallmarks of deep human intelligence. (11/12)

So I think it's becoming increasingly clear that efficiency and losslessness, each measured separately for compression and for decompression, give four potential axes along which we can begin to parameterize the space of possible (intelligent) minds.

But what are the others? (12/12)
