Crypto keeps discussing Scalability but often seems to miss the point. One proof: people compare TPS between Bitcoin, Ethereum, and Solana.
Please don't do it!!!
So, let's try to define Scalability better:
Here is my definition: 1/11
Scalability = More transactions verified on the worst expected machine in the network.
So what is the hardware we are talking about?
- For Bitcoin, it is a Raspberry Pi
- For Ethereum, it is a $40/month machine
- For Solana, it is a $3k/month on AWS 2/11
Now that we've defined the hardware, what are the constraints limiting the number of txs?
- Ratio chain growth/sync time
- Worst-case verification time of a block 3/11
If you simultaneously improve sync time and worst-case verification, you can add more transactions without putting the network at risk.
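The two constraints above can be sketched as simple inequalities: a synced node must verify blocks faster than they arrive, and a fresh node must process history faster than the chain grows. A minimal illustration (all numbers are hypothetical, for illustration only):

```python
# Sketch of the two constraints limiting how many txs a chain can carry.
# All numbers below are hypothetical, for illustration only.

def node_keeps_up(block_interval_s: float, worst_case_verify_s: float) -> bool:
    """A synced node stays synced iff worst-case block verification
    fits within one block interval on the worst expected machine."""
    return worst_case_verify_s < block_interval_s

def fresh_node_can_sync(chain_growth_mb_per_day: float,
                        sync_rate_mb_per_day: float) -> bool:
    """A new node eventually catches up iff it processes history
    faster than the chain grows."""
    return sync_rate_mb_per_day > chain_growth_mb_per_day

# Example: 12s blocks, 2s worst-case verification -> room for more txs.
print(node_keeps_up(12.0, 2.0))        # True
print(fresh_node_can_sync(500, 5000))  # True
```

Pushing more transactions per block raises both the worst-case verification time and the chain growth rate, which is why the two must improve together.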
How does one achieve scalability then?
Usually, the answer is
- More optimized software (Bitcoin Core optimizes a lot) 4/11
- More advanced data structures and execution layers (Erigon improving sync time; Solana requiring strict access lists)
But this is only one form of scalability, one I call "Sequencer Scalability". 5/11
Notice "Sequencer Scalability" is a moving target, based on software optimizations.
ZKPs enable another form of scalability, one I call "Full Node Scalability". 6/11
We can finally separate, at the hardware level, the core (aka miners) from the edges of the network (aka your phone)
That's the power of ZkRollups. We can increase hardware requirements for sequencers without hurting decentralization. 7/11
So in terms of L2 dichotomy, how can one split scalability?
- Arbitrum One (just reuse Geth): not Scaling
- Arbitrum Nitro (eWASM), Fuel (Parallelism): targeting Sequencer Scalability 8/11
- #StarkNet: natively Full Node Scalability and targeting Sequencer Scalability (using a mix of recursive STARKs and the usual techniques used by others) 9/11
To conclude, only ZKPs provide Full Node Scalability, and going from Full Node Scalability to Sequencer Scalability is infinitely easier than the reverse.
This is why I have 0 doubt that ZkRollups are the silver bullet of scaling.
So, let the games begin... 10/11
As promised, a thread on data availability, or: How I Learned to Stop Worrying and Love Ethereum
Let's start with a definition. We call the State the current set of elements stored on the chain. It can be UTXOs, the ETH balance in an account, or the active storage of a contract. 1/7
On StarkNet, the state is compressed using a Merkle Root and every state transition must update the state Merkle root on Ethereum, making it final.
Therefore, to continue pushing the chain forward, users must know all the current elements to recompute the Merkle tree. 2/7
This is called the Data Availability Problem.
StarkNet solves it by being a ZkRollup, publishing for each modified storage the last value on-chain. Think of this data broadcast as a snapshot of the chain done every once in a while. 3/7
I received quite a few questions about the advantages zkRollups bring compared to regular smart contract writing.
Here's an attempt at exploring those differences. 1/9
ZK-Rollups are blockchains (technically, they are commitchains @stonecoldpat0), which leads to the same dichotomy as any smart contract blockchain. We have:
- L2 calldata: data sent within a tx by a user to the L2
- L2 computation: computation done in the L2 2/9
- L2 storage read: storage slot consumed by a tx on the L2
- L2 storage write: storage slot modified by a tx on the L2
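The four resource dimensions above could be sketched as a toy fee model. The unit costs here are invented, not StarkNet's or anyone's actual pricing; they just illustrate that in a zkRollup, storage writes tend to dominate because they must be published on L1 for data availability.

```python
# Hypothetical L2 fee model over the four resource dimensions.
# Coefficients are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TxResources:
    calldata_bytes: int
    computation_steps: int
    storage_reads: int
    storage_writes: int

# Invented unit costs: writes dominate because modified storage
# must be published on L1 for data availability.
COST = {"calldata": 4, "step": 1, "read": 50, "write": 500}

def l2_fee(tx: TxResources) -> int:
    return (tx.calldata_bytes * COST["calldata"]
            + tx.computation_steps * COST["step"]
            + tx.storage_reads * COST["read"]
            + tx.storage_writes * COST["write"])

transfer = TxResources(calldata_bytes=100, computation_steps=2000,
                       storage_reads=2, storage_writes=2)
print(l2_fee(transfer))  # 400 + 2000 + 100 + 1000 = 3500
```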
So, what do zkRollups change from the programming standpoint? Quite a lot, actually. 3/9
Because this is what a weekend should look like, here's a random thread on StarkNet architecture:
StarkNet is the result of our work on Cairo. Cairo is a language/VM optimized for ZKPs.
Cairo as a VM has the structure of a CPU (CPU-AIR --> Cairo). 1/8
As a consequence, it can run with a single circuit (called AIR in the context of Starks) and prove multiple programs within the same proof.
Since we have a CPU, we can now program an Operating System. This is why the official name for StarkNet is StarkNet OS. 2/8
The StarkNet OS is responsible for every core protocol level program.
- State Management
- L1 <> L2 communication
- Contract Execution
- Cross contract calls
- Calling Cairo to run each program
- Tx structure
- Others... 3/8
An interesting fact, little known in the community:
Roughly half of the ERC-20 tokens on the market implement a standard called ERC-20 Snapshot (docs.openzeppelin.com/contracts/3.x/…). 1/9
You may not believe me, but @Uniswap's UNI is using it, @AaveAave's stAave is using it, and all the tokens used on Snapshot right now implement this standard.
Here is the UNI implementation 2/9
It might sound surprising, but this implementation has an annoying side effect: it adds between 20k and 100k gas per transfer. After 3,069,472 txs on the UNI ERC-20, at an average of 50 gwei (at the current ETH price), this feature costs the community a minimum of $0.3 per transfer
3/9
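A back-of-the-envelope check of the per-transfer figure above. The 20k extra gas and 50 gwei come from the thread; the ~$300 ETH price is an assumption chosen here purely to illustrate how the dollar cost is derived.

```python
# Back-of-the-envelope cost of the snapshot overhead per transfer.
# extra_gas_min and gas_price come from the thread; the ETH price
# is a hypothetical assumption for illustration.
GWEI = 1e-9                      # ETH per gwei

extra_gas_min = 20_000           # lower bound of snapshot overhead per transfer
gas_price_gwei = 50
eth_price_usd = 300              # hypothetical, not a quoted price

cost_eth = extra_gas_min * gas_price_gwei * GWEI   # 0.001 ETH
cost_usd = cost_eth * eth_price_usd
print(round(cost_usd, 2))        # 0.3
```

Multiplied by millions of transfers, this overhead adds up quickly, which is the point of the thread.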
1/n We just published the new extension of dAMM, which I have been working on for some time with @Brechtpd from Loopring. I believe it could be a groundbreaking design in the upcoming multi-L2 era.
A quick thread 🧵:
@Brechtpd 2/n For those who follow @StarkWareLtd's work: we published, a few months back, the first version of dAMM, an L2 AMM design where the restrictions are enforced on L1, greatly simplifying the AMM design of a constrained system like StarkEx.
3/n Not only does it provide a simpler design, but it also brings additional advantages, like the ability to lend the pool assets using an Asset Manager, as proposed by BalancerV2.