1/ 🧵
How and why "data availability (DA)" became the sexiest topic in #Ethereum land, the #L2 space, and for #rollups
2/ Pre-2021 (roughly speaking), DA for most #blockchains wasn't a concern for 2 reasons: 1) most blockchains didn't have enough usage to warrant any concern, and 2) the monolithic approach meant that each (full) node downloaded the entire block to check availability, no problemo
3/ However, this approach has its limitations/drawbacks, and thus new solutions like light clients, rollups (RUs), and the modular approach were implemented. @BanklessHQ @TrustlessState @RyanSAdams
4/ As a reminder, full nodes download and validate every transaction that has ever occurred on the chain since its genesis. Light nodes (typically) only check block headers. This means light clients are "lightweight" nodes that require fewer computing resources than a full node.
5/ This makes them more egalitarian bc they are cheaper/easier to run, further decentralizing the L1.
However, bc light nodes follow what the majority commits to as valid (vs verifying for themselves), light nodes must have a way to ensure that valid blocks are being published
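To make the full-node vs light-node distinction in 4/–5/ concrete, here's a toy sketch of a light client that verifies only header linkage. The hashing scheme is simplified for illustration; real headers commit to many more fields and use different encodings.

```python
import hashlib

def header_hash(parent: bytes, state_root: bytes, number: int) -> bytes:
    # Toy header hash over a few fields; real headers commit to much more.
    return hashlib.sha256(parent + state_root + number.to_bytes(8, "big")).digest()

genesis = {"parent": b"\x00" * 32, "state_root": b"\x11" * 32, "number": 0}
block1 = {
    "parent": header_hash(genesis["parent"], genesis["state_root"], genesis["number"]),
    "state_root": b"\x22" * 32,
    "number": 1,
}

def verify_chain(headers) -> bool:
    # A light client checks only that each header links to its parent --
    # it never downloads block bodies or executes transactions.
    # That's why it's so cheap to run.
    for prev, curr in zip(headers, headers[1:]):
        if curr["parent"] != header_hash(prev["parent"], prev["state_root"], prev["number"]):
            return False
    return True

print(verify_chain([genesis, block1]))  # True
```

The catch, as noted above: header linkage alone doesn't prove the block's data was ever published, which is exactly the gap DA solutions fill.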
6/ Data availability is critical in this regard because as long as all the execution data is made available on the mainnet, the chain does not require every node to execute every transaction in order to validate transactions and reach consensus. @pseudotheos @RyanBerckmans
7/ Because rollups can cryptographically guarantee (via a proof) that the transactions are valid, these transactions can now be executed by just a single node and posted to the L1 where it can be cross-checked by L1 nodes. @GuthL @0xEther @EliBenSasson
8/ All L1 nodes download the rollup’s data but only a certain portion of them execute the transactions/construct the rollup state, thereby reducing overall resource consumption.
9/ Additionally, the data within a batch is highly compressed prior to being submitted to the L1, further decreasing the resource burden. This is how rollups help trustlessly scale a blockchain without requiring an increase in node resources. @prateek_jain321 @Swagtimus
10/ However, a RU's #TPS is capped by its L1's data capacity. The more data capacity on the L1, the higher the (theoretical) throughput for RUs. Once an L1 runs out of data capacity for the RU, the limit has been reached and no additional txs can be processed
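The throughput cap in 10/ is simple arithmetic: max rollup TPS is roughly (L1 data budget per block) / (bytes per compressed tx) / (block time). The numbers below are purely illustrative assumptions, not exact mainnet figures.

```python
def max_rollup_tps(data_budget_bytes: int, block_time_s: float, bytes_per_tx: int) -> float:
    # Rollup throughput is capped by how much data the L1 can carry per second.
    return data_budget_bytes / bytes_per_tx / block_time_s

# Illustrative assumptions: ~120 kB of usable data per L1 block,
# 12 s block time, ~12 bytes per compressed rollup transfer.
print(round(max_rollup_tps(120_000, 12, 12)))  # ~833 tx/s
```

Plug in a bigger data budget (e.g. via sharding or blobs) and the ceiling rises proportionally, which is the whole thesis of 11/.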
11/ Therefore, now the limiting factor for a blockchain’s scalability is its data availability.
This is huge! Make more space, get more scalability!
12/ To address this issue, new specialized DA chains have launched/are being built. These chains are built to serve solely as a DA/shared security layer for RUs by maximizing the DA capacity. Examples like @CelestiaOrg and @0xPolygon Avail focus solely on providing high data capacity.
13/ In summary, DA is extremely impt for new #modular chains for 2 reasons:
- adequate DA is required to ensure a RU sequencer’s submissions can be cross-checked & challenged
- DA is now the bottleneck for scalability. Maximizing DA on L1 is critical for a RU's full potential.
14/ *Bonus content* Data Availability Sampling (#DAS)
Currently, RUs need to download an entire block to verify DA. This is resource-intensive.
Therefore, Data Availability Sampling (DAS) has been proposed as a solution. @epolynya @VitalikButerin
15/ DAS is a way to verify availability on L1 without downloading an entire block (Merkle roots are used to determine what/where to sample).
This is more efficient bc it enables nodes to download only a portion of a block of data on the L1 & still have (essentially) the same guarantees.
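A toy sketch of the sampling idea in 15/: the block is split into chunks committed under a Merkle root, and a node downloads only a few random chunks plus proofs. Chunk sizes/counts here are arbitrary, and real DAS also layers in erasure coding so that withheld chunks are detectable.

```python
import hashlib, random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, idx):
    # Collect the sibling hash at each level from leaf to root.
    layer, proof = [h(l) for l in leaves], []
    while len(layer) > 1:
        proof.append(layer[idx ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return proof

def verify(leaf, idx, proof, root):
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

chunks = [bytes([i]) * 32 for i in range(8)]  # a "block" split into 8 chunks
root = merkle_root(chunks)                    # the commitment in the header

# A sampling node downloads only a few random chunks + their proofs:
for idx in random.sample(range(8), 3):
    assert verify(chunks[idx], idx, merkle_proof(chunks, idx), root)
print("samples verified")
```

If a block producer withholds data, it can't serve valid proofs for the missing chunks, so enough random samplers will catch it with high probability.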
16/ #Ethereum plans to tackle this through data #sharding after the #MERGE. This means not all validators will download the same transaction data as every node currently does.
17/ With DAS, rather than all network nodes downloading all of the data from every shard, each node only downloads a fraction because there are assurances that a minority of nodes can come together to reconstruct all shard chain blocks if needed. @odin_free @j0hnwang
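The reconstruction guarantee in 17/ comes from erasure coding: data is extended so that any sufficiently large subset of pieces can rebuild the rest. Here's a toy sketch using polynomial interpolation over a prime field; real systems use Reed-Solomon codes with different parameters, and the data values below are made up.

```python
P = 2**61 - 1  # a prime modulus; real schemes use other fields

def eval_poly(coeffs, x):
    # Horner evaluation of the polynomial with the given coefficients, mod P.
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def lagrange_at(points, x):
    # Interpolate the unique degree-(k-1) polynomial through k points
    # and evaluate it at x (all arithmetic mod P).
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

data = [42, 7, 99, 1000]  # k = 4 original chunks, encoded as field elements
k = len(data)
# Encode: treat chunks as polynomial coefficients, publish 2k evaluations.
shares = [(x, eval_poly(data, x)) for x in range(1, 2 * k + 1)]

# Any k of the 2k shares suffice to reconstruct all the others:
subset = [shares[0], shares[2], shares[5], shares[7]]
recovered = [lagrange_at(subset, x) for x, _ in shares]
print(recovered == [y for _, y in shares])  # True
```

This is why "a minority of nodes can come together to reconstruct" the block: half the shares can vanish and nothing is lost.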
18/ This is essentially a shared security model across the network shards where any individual shard node can raise a dispute to be resolved by all nodes on-demand, similar to the @NEARProtocol implementation. @jadler0 @AlexSkidanov @ilblackdragon
19/ #Sharding will significantly increase the on-chain data capacity & help create room for even more and even cheaper rollups. A rollup's TPS and fees are no longer restricted by the data capacity of a single shard but, now, in the case of Ethereum, 4-64 shards in the near future.
20/ The #Surge, a step on the Ethereum roadmap, consists of multiple upgrades designed to improve RUs, like #EIP-4488 or #blob transactions. These will reduce transaction fees by ~5x or more. @HenriLieutaud @litocoen @Darrenlautf
21/ This then leads to #danksharding - a data layer built specifically to accelerate RUs. This integrates DAS, meaning the more decentralized the L1, the more capacity there is for RUs. As bandwidth improves & #ETH decentralizes, capacity will continue to increase to 1M+ TPS.
22/ Fin.
I cover all this plus most rollup implementations, their pros/cons, sidechains, scaling, Merge, sharding, PoW vs PoS, modular vs monolithic architecture, bridges, and more with resources and links here
🧵 on some of the major smart contract chains, their different approaches, and how HOPEFULLY (for the love of all that is holy) we are moving away from simply "X chain is superior because it did XXXX TPS on a closed environment testnet" #Ethereum #terraluna #Solana #AVAX
As @epolynya has alluded to several times, #TPS numbers are almost meaningless now. Especially anything under 100k.
At the risk of having this thrown in my face 5 years from now, TPS is essentially solved.
This is due to many things but some reasons include:
2/ Blockchains like #Bitcoin and #Ethereum strive for maximum #decentralization and #censorship-resistance while remaining totally open and inclusive networks. However, they also want to scale to accommodate billions of users.
3/ As they stand right now, their limited capacity to process transactions at the base layer (~7 and ~20 TPS, respectively) is in direct opposition to achieving that goal.
For years, as I have discussed the value prop for seizure-resistant, non-sovereign 💲 like #BTC with noobies, I have always had to caveat it with "we don't think about it in the Western world too much bc our money, banking system, and law typically work well"
2/ #Bitcoin's value in the Western world has NEVER been more obvious thanks to the last ~ 2 years.
Reminder, it has ALWAYS been valuable to those less fortunate living under authoritarianism or double-digit inflation (over half the world) newsweek.com/half-world-liv…
3/ Now, Westerners are getting their taste:
- 10%+ inflation for many (screw the official #CPI numbers)
- Bank accounts being inexplicably closed @haydenzadams @PeterMcCormack
- Govt freezing funds and threatening companies #TruckersForFreedom
- ~25% M2 debasement in ~2 years
1/ Previously, I covered #L2 zk-rollup projects Starkware and zkSync but looking back, I should have begun with sidechains.
The who, what, why/why NOT, security guarantees, etc.
So, here's a 🧵 discussing sidechains generally and then a bit on #Polygon
2/ In the context of #Ethereum, sidechains are separate, Ethereum-compatible blockchains. Sidechains can be independent #EVM-compatible blockchains as well as application-specific blockchains catering to #ETH users and use cases like @0xPolygon or @Ronin_Network.
3/ Sidechains are designed to be EVM-compatible so developers can essentially copy and paste their code and easily interoperate with Ethereum and all of its infrastructure, including wallets, block explorers, and more.
Another deep-dive thread into one of the projects changing the scaling world 🧵...
2/ First, again, some high-level Pros and Cons
Pros with using Starkware products
- Increased TPS compared to ORs (~9,000+ TPS on Ropsten testnet)
- Faster withdrawals (no challenge period), enabling better capital efficiency and liquidity
- #Validiums (discussed below)
3/ Cons
- Developer UX and porting of dApps from L1 to #L2 are more challenging than with OR options
- #Cairo language is less popular among developers = smaller talent pool to build on Starkware
- With Starkware's #Validium option, there's a technical challenge in solving the DA problem.
1/ CT giving waaay too much shine to these @bitfinex idiots.
Sure, reality TV is fun (I guess) but instead, I’ll try to send the spotlight to a more deserving group: @zksync @the_matter_labs @gluk64
Thread time 🧵...
2/ To begin, some high-level Pros and Cons of zkSync’s approach to zk-rollups (ZKs)
Pros
- Less data contained in each transaction increases throughput and decreases fees (vs L1)
- No withdrawal periods and faster finality
- Inherent (and cheap) privacy (improvements)**
3/ Cons
- Generalized smart contract support (similar to @StarkWareLabs #StarkNet) is not live or production-ready
- Initial trusted setup ceremony scares some, introduces trust
- New, less battle-tested cryptography **