1/15
The scalability of a #zkRollup IS NOT limited by the prover.
I'm seeing some misunderstandings of how a zkRollup works.
Let me explain in this thread why the prover is not the limiting factor and what the actual limitations of zk (and optimistic) rollups are.
2/15
The first step is keeping the network synchronised. This is not specific to a zkRollup; it's the same for any chain (L1, L2, .., L43, zk, optimistic, side-chain, etc.).
3/15
Once you have one (or many) nodes synchronised, you need to build the proofs for all those batches (we call an L2 block a "batch" to distinguish it from the L1 blocks).
4/15
To build the proof, you need to re-execute the batch and generate the zkProof. This takes some time and resources. In the case of Polygon #zkEVM it's ~2 minutes for a 10M gas batch on a single machine (we internally call this reference machine "The Rocket": 128 cores and 512 GB of RAM).
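To put a rough number on that single-prover throughput (a quick back-of-the-envelope using only the figures above):

```python
# Rough per-prover throughput, from the ~2 min / 10M gas figure above.
BATCH_GAS = 10_000_000   # gas in one batch (figure from this thread)
PROVING_TIME_S = 120     # ~2 minutes per batch on "The Rocket"

gas_per_second_per_prover = BATCH_GAS / PROVING_TIME_S
print(f"~{gas_per_second_per_prover:,.0f} gas/s per prover")  # ~83,333 gas/s
```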
5/15
But this process can run in parallel! That means you can have one server computing the proof for batch 1, a 2nd server for batch 2, a 3rd server for batch 3, and so on.
6/15
If the network has high demand and many batches are being generated, you will need to spin up more provers to keep up; if the load on the network decreases, you can turn off some of those servers.
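A minimal sketch of that elasticity (the batch intervals below are made-up demand levels, not protocol parameters):

```python
import math

PROVING_TIME_S = 120  # ~2 min per batch on "The Rocket" (from above)

def provers_needed(batch_interval_s: float) -> int:
    """Provers required so the proof queue doesn't grow: a new batch
    arrives every batch_interval_s seconds and each prover finishes
    one batch every PROVING_TIME_S seconds."""
    return math.ceil(PROVING_TIME_S / batch_interval_s)

# Hypothetical demand levels: a batch every 60s, 10s, 2s.
for interval in (60, 10, 2):
    print(f"batch every {interval:>2}s -> {provers_needed(interval):>2} provers")
# batch every 60s ->  2 provers
# batch every 10s -> 12 provers
# batch every  2s -> 60 provers
```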
7/15
Once you have all the batch proofs of a contiguous sequence (a chain segment), you can aggregate them. That means you can, for example, compute a proof that proves the batch 1 proof and the batch 2 proof. You can do the same for the batch 3 and batch 4 proofs.
8/15
So you can build a tree of proofs where the root proves a full chain segment. You can build this tree with whatever shape you want, and in parallel: one server can aggregate proofs 1 and 2 while another aggregates proofs 3 and 4. Each aggregation proof takes ~10s on "The Rocket".
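A toy sketch of that aggregation tree (aggregate_pair is a placeholder for the real recursive prover; the thread pool just illustrates that each pair at each level can be proven on a different server):

```python
from concurrent.futures import ThreadPoolExecutor

def aggregate_pair(proof_a, proof_b):
    # Placeholder: in reality this is a recursive proof that verifies
    # both input proofs (~10s on "The Rocket" per this thread).
    return f"agg({proof_a},{proof_b})"

def aggregate_segment(proofs):
    """Fold a list of batch proofs into one segment proof, level by level.
    The pairs at each level are independent, so they can run in parallel."""
    with ThreadPoolExecutor() as pool:
        while len(proofs) > 1:
            pairs = list(zip(proofs[0::2], proofs[1::2]))
            leftover = [proofs[-1]] if len(proofs) % 2 else []
            proofs = list(pool.map(lambda p: aggregate_pair(*p), pairs)) + leftover
    return proofs[0]

print(aggregate_segment(["p1", "p2", "p3", "p4"]))
# agg(agg(p1,p2),agg(p3,p4))
```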
9/15
The last step is to send this aggregated proof onchain. This proof is what stores the rollup state onchain and allows users to withdraw their funds. This happens once in a while (every 30 minutes in the case of the @0xPolygon #zkEVM).
10/15
For this last step the proof is converted from a STARK to a SNARK (FFLONK), which reduces the gas cost of verifying it onchain. This conversion takes about 2 minutes on "The Rocket", and the gas cost of the whole TX is about 350K no matter how long the proven segment is.
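That fixed ~350K gas gets amortised over every tx in the segment. A rough example (the batch rate and txs-per-batch below are assumptions, just for illustration):

```python
VERIFY_GAS = 350_000      # gas of the whole verification TX (from above)
SEGMENT_MINUTES = 30      # one aggregated proof posted every ~30 min (from above)

# Assumed load, purely illustrative:
BATCHES_PER_MINUTE = 2
TXS_PER_BATCH = 200

txs_in_segment = SEGMENT_MINUTES * BATCHES_PER_MINUTE * TXS_PER_BATCH
print(f"~{VERIFY_GAS / txs_in_segment:.1f} gas of verification per L2 tx")
# ~29.2 gas per tx, and it shrinks as the segment covers more txs
```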
11/15
The parameter that matters here is the proving cost, as it affects the TX fee. But this cost is becoming insignificant compared to other costs, like the data availability cost, the cost of the L1 txs, the maintenance cost, or even the capital cost of optimistic rollups.
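A back-of-the-envelope feel for why the proving cost stops mattering (every figure marked "assumed" is an illustrative guess, not a measurement; only the ~2 min/batch number comes from this thread):

```python
# Proving (hardware) cost per tx vs data-availability cost per tx.
SERVER_COST_PER_HOUR = 3.0        # assumed: hourly rental of a big prover machine, in $
BATCHES_PER_HOUR_PER_SERVER = 30  # ~2 min per batch (from this thread)
TXS_PER_BATCH = 200               # assumed

proving_cost_per_tx = SERVER_COST_PER_HOUR / (BATCHES_PER_HOUR_PER_SERVER * TXS_PER_BATCH)

CALLDATA_GAS_PER_BYTE = 16        # L1 price of a non-zero calldata byte
TX_DATA_BYTES = 110               # assumed: compressed size of one L2 tx posted to L1
GAS_PRICE_GWEI = 20               # assumed
ETH_PRICE_USD = 2_000             # assumed

da_cost_per_tx = TX_DATA_BYTES * CALLDATA_GAS_PER_BYTE * GAS_PRICE_GWEI * 1e-9 * ETH_PRICE_USD

print(f"proving : ~${proving_cost_per_tx:.4f} per tx")  # ~$0.0005
print(f"DA      : ~${da_cost_per_tx:.4f} per tx")       # ~$0.0704
```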
12/15
So NO, the scalability of zkRollups is not limited by the provers at all!
What, then, are the scalability limitations of a zkRollup?
13/15
Well, they are the same as for any other rollup. The next bottleneck is data availability. This is why pushing #Ethereum2 #danksharding and #EIP4844 is so important.
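Roughly why #EIP4844 matters for that bottleneck (blob size and target count are from the EIP; the bytes-per-tx figure is an assumption):

```python
# Rough DA throughput from EIP-4844 blob space alone.
BLOB_BYTES = 128 * 1024       # one blob = 4096 field elements * 32 bytes
TARGET_BLOBS_PER_BLOCK = 3    # EIP-4844 target (max 6)
BLOCK_TIME_S = 12
TX_DATA_BYTES = 110           # assumed: compressed L2 tx size

blob_bytes_per_second = TARGET_BLOBS_PER_BLOCK * BLOB_BYTES / BLOCK_TIME_S
print(f"~{blob_bytes_per_second / TX_DATA_BYTES:.0f} L2 tx/s of data availability")
# ~298 tx/s from the target blob space, shared across all rollups
```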
14/15
Once data availability is solved, the next bottleneck will be keeping the chain synchronised (nothing to do with the zkProofs). And it's at this point that thinking of a multi-rollup ecosystem running in parallel will make a lot of sense (a multiprocessor approach).