A true gift from Justin Thaler (prof at Georgetown), making it 100x easier for all of us to understand what SNARKs are, the Pareto frontier of known constructions, and the labyrinthine considerations that govern the key bottleneck (proof generation time): a16zcrypto.com/measuring-snar…
Some background and additional context about how SNARKs fit into the current vision for web3 infrastructure 👇
One approach to increasing the throughput of an L1 blockchain like Ethereum is to offload computation to an L2 (perhaps another network, or even a centralized server), with the L2 periodically notifying the L1 about the transactions it has executed and the consequent L2 state.
How can the L1 know that the L2 is not fabricating the alleged new state (e.g., transferring everyone's assets into the account of the L2 operator)?
The vision behind validity rollups (often misleadingly called zk rollups, despite the lack of zero knowledge) is for the L2 to post to L1, alongside a description of the transactions executed by the L2 and the new L2 state, a proof that that state was correctly computed.
The L1 can't simply take the proof on faith, so it needs to verify its correctness. But wait, how can it do that without re-executing all the L2 transactions itself? (This would defeat the purpose of the L2, which is to take some of the load off of the L1.)
Here's the magic: for any computation you might do, it's possible to "show your work" (i.e., generate a proof) in such a way that the proof can be verified (in our case, by the L1) in much less time than it took to do the original computation (done in our case by the L2).
The fact that you can compress arbitrary computations in this way is shocking, IMO. (It's related to one of the deepest results in theoretical computer science, the PCP Theorem. Yet another example of how fundamental theoretical work can shape technology several decades later.)
(The "S" in SNARK stands for "succinct," and refers to this magical "computational compression" property.)
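The flavor of "verifying is much cheaper than recomputing" shows up in a much simpler classical example: Freivalds' algorithm for checking a claimed matrix product. (This is not a SNARK — no succinct proof object, and it only works for one specific computation — but it illustrates the asymmetry.) Checking that C = A·B takes O(n²) randomized work per trial, versus O(n³) to redo the multiplication. A minimal sketch, with the function name and trial count my own choices:

```python
import random

def freivalds_check(A, B, C, trials=30):
    """Probabilistically check the claim A @ B == C.

    Each trial does three matrix-vector products, costing O(n^2),
    versus O(n^3) to recompute A @ B from scratch. A wrong C is
    caught in each trial with probability >= 1/2, so the chance of
    accepting a bad product is at most 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B r, then A (B r), and C r -- each is O(n^2).
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely a wrong product
    return True  # correct with overwhelming probability
```

SNARKs generalize this kind of asymmetry to *arbitrary* computations, with a short proof string the verifier can check.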
We've known since the 1990s that SNARKs are possible in principle. For the last 5+ years many researchers have been working hard to make them more and more practical.
These days, the most common bottleneck is the amount of computation needed to generate a SNARK proof. (Remember SNARKs are promised to be easy to *verify*, not necessarily easy to generate.)
There are lots of approaches to reducing prover time, including hardware acceleration, recursion, specialized assembly languages, better constructions, handcrafted optimizations, etc. See Justin's article for the full story.
Excited to watch how all the SNARK-related storylines are going to play out--the suspense is killing me!
Lecture 8 of the Foundations of Blockchains lecture series (Longest-Chain Consensus) is a bit of a doozy, so I'm releasing it in two parts. Here's part 1:
tl;dr thread (for part 1) below:
1/14
Longest-chain consensus protocols as in Bitcoin and (the original version of) Ethereum are an important category of blockchain protocols (different from BFT-type protocols like Tendermint).
2/14
Longest-chain consensus starts with a genesis block and consists of a sequence of rounds, where in each round one node acts as the current block proposer.
3/14
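The round structure above can be sketched as a toy simulation (the class and function names are my own illustration, not from the lecture), assuming every proposer is honest and extends a longest chain:

```python
import random

class Block:
    def __init__(self, parent, proposer, round_no):
        self.parent = parent      # None only for the genesis block
        self.proposer = proposer
        self.round_no = round_no
        self.height = 0 if parent is None else parent.height + 1

def chain_from(tip):
    """Walk back from a tip to genesis; return the chain in order."""
    chain = []
    while tip is not None:
        chain.append(tip)
        tip = tip.parent
    return list(reversed(chain))

def run_rounds(nodes, num_rounds, rng):
    """Simulate rounds of longest-chain consensus.

    Each round, one randomly chosen node is the proposer; an honest
    proposer extends (one of) the longest chains it has seen.
    """
    genesis = Block(parent=None, proposer=None, round_no=0)
    blocks = [genesis]
    for r in range(1, num_rounds + 1):
        proposer = rng.choice(nodes)
        tip = max(blocks, key=lambda b: b.height)  # longest-chain rule
        blocks.append(Block(parent=tip, proposer=proposer, round_no=r))
    return blocks
```

With all-honest proposers this produces a single chain; forks only arise once some proposers deviate (or messages are delayed), which is where the interesting analysis begins.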
2/ Last year I worked with a16z crypto as an advisor—the team is fantastic, their connections to the best projects and people in the space are unparalleled, and I had a ton of fun.
3/ So when @cdixon & @alive_eth asked me to head up a new research lab devoted to advancing the science and technology of blockchain/crypto/web3, I jumped at the opportunity.
Blockchains are (virtual) computers, not databases.👇
The raison d'être of a database is to store data and support efficient queries over that data, not to perform general computations.
The raison d'être of a blockchain with a general (i.e., Turing-complete) contract layer *is* to support general computations (modulo capacity limitations).
Lecture 2 of my Foundations of Blockchains lecture series (The Dolev-Strong Protocol) is now available:
tl;dr thread below:
1/15
Our current goal is to identify assumptions under which there are state machine replication (SMR) protocols that satisfy consistency and liveness.
2/15
Assumption 1: the set of nodes running the protocol is fixed and known up front (“permissioned setting”).
3/15
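Under Assumption 1 (a fixed, known node set) plus idealized digital signatures, the Dolev-Strong protocol achieves broadcast against any number f of faulty nodes in f+1 rounds. A toy simulation, with signatures modeled as plain lists of signer ids rather than real cryptography (all names here are my own sketch, not the lecture's code):

```python
def dolev_strong(n, f, sender, sender_values):
    """Toy Dolev-Strong broadcast with idealized signatures.

    `sender_values[i]` is what the sender transmits to node i in
    round 0; a faulty sender may equivocate by sending different
    values to different nodes. Honest nodes run rounds 1..f+1 and
    then output the unique value they extracted, or a default (None)
    if they extracted zero or several values.
    """
    extracted = {i: set() for i in range(n)}
    extracted[sender].add(sender_values[sender])
    # Round 0: the sender signs and sends its (possibly inconsistent) values.
    inbox = {i: [(sender_values[i], [sender])] for i in range(n) if i != sender}
    for r in range(1, f + 2):  # rounds 1 .. f+1
        new_inbox = {i: [] for i in range(n)}
        for i, msgs in inbox.items():
            for value, sigs in msgs:
                # Accept a value only with r distinct signatures, the
                # first of which must be the sender's.
                if sigs[0] == sender and len(set(sigs)) >= r and value not in extracted[i]:
                    extracted[i].add(value)
                    # Add own signature and relay to everyone else.
                    for j in range(n):
                        if j != i:
                            new_inbox[j].append((value, sigs + [i]))
        inbox = new_inbox
    return {i: (vals.pop() if len(vals) == 1 else None)
            for i, vals in ((i, set(extracted[i])) for i in range(n))}
```

The f+1 rounds are exactly what forces agreement: if a value reaches any honest node by the last round, that node's relay (one more signature) reaches every other honest node, so an equivocating sender can't split the honest nodes' extracted sets.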
Lecture 1 of my Foundations of Blockchains lecture series is now available: (Will try to post one new lecture a week for the next 2-3 months.)
tl;dr thread below:
1/12
This lecture series is about the science and technology of blockchain protocols and the applications built on top of them, with an emphasis on fundamental principles rather than specific protocols. 2/12
We’re witnessing a new area of computer science blossom in real time, and future generations will be jealous of your opportunity to get in on the ground floor. 3/12
Video of my talk "A Theory of DeFi?" (as part of the ACM CCS Workshop on DeFi and Security) is now available. Summary in a thread:
1/30
It's debatable whether DeFi has matured enough to justify building a detailed theory around it. But suppose it has. What might such a theory look like, and how might it contribute to DeFi's future trajectory?
2/30
Difficult theorems (the FLP impossibility of asynchronous consensus, the PCP theorem, etc.) get the most headlines. DeFi has no difficult theorems yet, and it's not clear whether it ever will.
3/30