If you want to learn how blockchains can scale massively while *increasing* decentralization *and* security, this is among the most important seminars you'll ever watch.

#danksharding
I'll be live-tweeting through the #danksharding seminar. You have 10+ hours to mute me if you'd rather not follow along.
I'm playing it by ear, but this will most likely be a casual live-tweet thread with my ramblings and some shitposting.

If you want to actually learn how danksharding works, please watch the real thing here:
We're live with Hsiao-Wei kicking things off! Looks like Dankrad uses Ubuntu + Chrome. Missed opportunity: Chromium or Firefox!
On the agenda: a recap of existing data sharding, what's new, and what makes the new danksharding design good for Ethereum

What's old: data sharding was always on the roadmap, meant to accelerate rollups

With rollups 100x more efficient, execution sharding doesn't make sense
Data availability sampling: O(n) data, but O(1) work
Each node only downloads a small chunk
Erasure coding: 50% of the data is sufficient to reconstruct the whole data
KZG commitments: proofs for ensuring coding is valid
No fraud proofs - me: think of KZG proofs as the ZKPs of DA
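Me again: to make the erasure-coding intuition concrete, here's a toy sketch I put together (not the actual protocol, which works over the BLS12-381 scalar field with KZG commitments; the field, chunk values and sizes here are made up). Treat the data chunks as evaluations of a polynomial, extend them to twice as many points, and any 50% of the extended points are enough to rebuild everything:

```python
# Illustrative only: Reed-Solomon-style extension over a toy prime field.
# Any k of the 2k extended points recover the original k data chunks.
P = 65537  # toy prime modulus, not the field the real protocol uses

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

data = [11, 22, 33, 44]                       # k = 4 original chunks
k = len(data)
base = list(enumerate(data))                  # interpret chunks as evaluations at x = 0..k-1
extended = [(x, lagrange_eval(base, x)) for x in range(2 * k)]  # 2k total points

# Pretend the first half was lost; any k surviving points suffice.
survivors = extended[k:]
recovered = [lagrange_eval(survivors, x) for x in range(k)]
assert recovered == data
print("recovered original data from 50% of the extended points:", recovered)
```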
Old design:

Different committees for each data shard

What enables new design:

Proposer-builder separation - proposers are validators; builders are a dedicated role, needs only one honest builder. Designed to "contain" MEV.
With PBS, Ethereum moves to a two-slot system. I believe each slot is 8 seconds, so the full block is 16 seconds. The added complexity of PBS necessitates this increase, though I believe there's also a separate proposal for single-slot PBS
crLists: proposers can publish a list of transactions that builders are forced to include, so builders can't censor transactions

Shoutout @fradamt!
@fradamt How crList could work, still an evolving design space, WIP

How builders can be a very centralized role, but block proposers (validators) can still force transactions to be included - maintaining the current censorship resistance model
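My own toy illustration of the crList idea (not the actual spec; the structure and the gas limit are made up): a builder's block only counts as valid if every listed transaction is either included or genuinely wouldn't fit.

```python
# Toy illustration of a crList check, not the actual spec.
# A block satisfies the crList if every listed tx is either included
# or wouldn't have fit in the remaining gas anyway.
MAX_BLOCK_GAS = 30_000_000  # made-up limit for this example

def satisfies_crlist(block_txs, crlist):
    included = {tx["hash"] for tx in block_txs}
    gas_used = sum(tx["gas"] for tx in block_txs)
    for tx in crlist:
        if tx["hash"] in included:
            continue
        if gas_used + tx["gas"] <= MAX_BLOCK_GAS:
            return False  # the builder had room but censored this tx
    return True

crlist = [{"hash": "0xaa", "gas": 21_000}]
censoring_block = [{"hash": "0xbb", "gas": 1_000_000}]
print(satisfies_crlist(censoring_block, crlist))           # False: censored with room to spare
print(satisfies_crlist(censoring_block + crlist, crlist))  # True: listed tx included
```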
2D KZG scheme

Instead of having a single KZG commitment, you have m*k samples for maximum resilience

No fraud proofs required

If 75%+1 of samples are available: all data is available and reconstructable

(Yes, this is some real 4D gigabrain dank stuff)
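Here's a toy "peeling" model I wrote of how reconstruction works on a 2D-extended grid (the real scheme uses polynomial erasure coding per row/column; this only tracks which cells become recoverable, and the square k-by-k shape is my simplification): any row or column of the extended grid with at least half its cells available can be fully rebuilt, and you iterate until nothing more changes.

```python
# Toy model of 2D reconstruction (availability only, no actual erasure coding):
# any row or column of the 2k x 2k extended grid with >= k cells available
# can be fully rebuilt; iterate ("peel") until nothing more changes.
def fully_recoverable(avail, k):
    n = 2 * k
    grid = [row[:] for row in avail]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if k <= sum(grid[i]) < n:                   # row i is rebuildable
                grid[i] = [True] * n
                changed = True
        for j in range(n):
            col = [grid[i][j] for i in range(n)]
            if k <= sum(col) < n:                       # column j is rebuildable
                for i in range(n):
                    grid[i][j] = True
                changed = True
    return all(all(row) for row in grid)

k, n = 4, 8
# Suppose the entire original k x k quadrant is withheld (75% of cells remain).
avail = [[not (i < k and j < k) for j in range(n)] for i in range(n)]
print(fully_recoverable(avail, k))  # True: the extension lets the missing data be rebuilt
```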
Danksharding:

Single committee, builder, committing to multiple data shards
With danksharding: execution and data availability are confirmed in the same block

Me: a unified settlement and data availability layer!
Amazingly, an unavailable block cannot get more than 1/16th of attestations

55,000 online validators can guarantee full reconstruction of all data!
DA sampling (implemented incrementally) brings this down to a 1-in-a-billion chance of an unavailable block passing!

An incredible step forward over the majority assumptions current blockchains work with

Bandwidth required: only 2.5 kB/s
PS: there's still an honest majority for consensus
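Quick back-of-the-envelope from me on where 1-in-a-billion odds can come from (my own arithmetic, not figures from the talk): if a block is not reconstructable, at least 25% of the extended samples are missing, so each random query fails with probability at least 0.25, and the chance that s independent queries all succeed anyway is at most 0.75^s.

```python
# My arithmetic, not the talk's: bound on the probability that an
# unreconstructable block (< 75% of extended samples available) passes
# s independent random sample checks.
def false_availability_bound(s, available_fraction=0.75):
    return available_fraction ** s

for s in (10, 30, 75):
    print(s, "samples ->", f"{false_availability_bound(s):.1e}")

# Smallest number of samples that pushes the bound below 1e-9:
s = 1
while false_availability_bound(s) > 1e-9:
    s += 1
print("samples needed for ~1-in-a-billion:", s)  # 73 under these assumptions
```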
Many advantages to danksharding; it becomes very easy for rollups. Synchronous calls between zk rollups and L1 are possible, which opens up an exciting design space for rollup applications
Increased bribery resistance to 1/32nd of the validator set, and the full validator set over one epoch

Full nodes will be able to ensure data availability with only 2.5 kB/s
Challenges:

Builders constructing KZG proofs is very expensive, needing 100s of cores. However, GPUs can dramatically improve this and are being investigated. Also needs 2.5 Gb/s bandwidth.

This is fine because builders are a dedicated honest minority role, and validators can force...
...builders to include transactions with crList
Micah: Danksharding ensures data availability, but who is going to ensure the data remains available forever?

Dankrad: The Ethereum consensus layer ensures data is published, need not be responsible for permanent data storage. Many solutions possible for long-term storage.
Vitalik: Long-term data availability problem is an easy one because it's a 1-of-N assumption. You just need one copy of all the data in the world. There are many solutions possible.

Note: I commented on this earlier today:
Justin (I think?): 55,000 validators are needed to guarantee full reconstruction; do they need to be online and honest?

Dankrad: Yes, but this is a very pessimistic estimate
Q from arnotheduck (?): What happens if all builders go missing?

Dankrad: Very, very unlikely. Anyone can build blocks with less data. Distributed builders are quite possible - still working on it, join the discussion.
Micah: What mechanisms to determine how many data shards there are?

Dankrad: Limitations are more on the validator side, each validator should reconstruct any incomplete rows/columns. Builder specs are not a concern.

Vitalik: There's a limit to long-term storage, but...
...we're well within the limits. Things will only start getting uncomfortable if we added another order of magnitude or two

Distributed builders mean builder requirements are never a concern

Side-note: As storage costs get cheaper over time, this ceiling can continue increasing
Onto Vitalik's presentation about blob-carrying transactions. This is a precursor to danksharding, which can be implemented sooner.
In this EIP, every node is required to download all the data, with a target of ~1 MB per block.

This data will be part of the beacon block on the beacon chain.

(Yes, it's a block size increase, deal with it!)
Introduces a new transaction type on the execution chain, but it's just raw blobs of data. Comes with gas, basefee etc. as you would with a regular Type 2 transaction. Will be passed around the p2p net in a certain format, <insert details I as a non-developer don't understand>
Interestingly, the transaction format is designed to be future-proof in a quantum world: KZG will be replaced by STARK proofs (IIUC)
Data blobs have their own separate EIP-1559 fee market. My take on what this means: gas fees on rollups reset to ~zero, and will start increasing only after the 1 MB target is saturated

This target means 5,000 TPS for the baseline 16-byte transaction; 15,000 TPS for dYdX-type txs
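My back-of-the-envelope for those TPS numbers, assuming a 12-second slot and the byte sizes above:

```python
# My back-of-the-envelope, assuming a 12-second slot and the ~1 MB/block target.
TARGET_BYTES_PER_BLOCK = 1_000_000
SLOT_SECONDS = 12

def tps(bytes_per_tx):
    return TARGET_BYTES_PER_BLOCK / bytes_per_tx / SLOT_SECONDS

print(f"baseline 16-byte tx:     ~{tps(16):,.0f} TPS")    # ~5,200
print(f"dYdX-style 5.35-byte tx: ~{tps(5.35):,.0f} TPS")  # ~15,600
```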
A bunch of code with annoying-looking =, ( and : signs; I'll leave this to the actual devs. You can find the details here: notes.ethereum.org/@vbuterin/blob…
Alright, back to the interesting stuff: how will rollups actually use these blobs?

ORs: there's a blob verification precompile implemented to make it easy when a fraud proof requires access to the contents of the blob
ZKRs: would provide two commitments, the KZG commitment to the blob and the validity proof from whatever ZKP system the ZKR uses internally

A commitment proof-of-equivalence protocol proves that the KZG commitment and the ZKR's proof refer to the same data. Elegant!
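A heavily simplified sketch of the proof-of-equivalence shape, from me (mock hash "commitments" and no real opening proofs, everything here is illustrative): hash both commitments to get a Fiat-Shamir challenge point, have each side open its commitment there, and check that the evaluations match.

```python
import hashlib

# Toy proof-of-equivalence: show two commitment schemes commit to the same data
# by comparing their polynomial evaluations at a Fiat-Shamir challenge point.
# Mock hash commitments stand in for KZG and the ZKR's internal commitment;
# real schemes would also provide succinct opening proofs at z.
P = 2**61 - 1  # toy prime field

def commit(scheme_tag, data):
    return hashlib.sha256(scheme_tag + str(data).encode()).hexdigest()

def eval_poly(coeffs, x):
    """Treat the blob chunks as polynomial coefficients; evaluate at x (mod P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

blob = [5, 17, 42, 99]                        # the data both commitments should encode
c_kzg = commit(b"kzg", blob)                  # commitment posted with the blob tx
c_zkr = commit(b"zkr", blob)                  # commitment used inside the ZKR's proof

# Fiat-Shamir: the challenge point depends on both commitments.
z = int(hashlib.sha256((c_kzg + c_zkr).encode()).hexdigest(), 16) % P

# Each side "opens" its commitment at z (here: just evaluate, no real opening proof).
y_kzg = eval_poly(blob, z)
y_zkr = eval_poly(blob, z)
print("equivalence check passes:", y_kzg == y_zkr)
```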
Achieves what EIP-4488 accomplishes, but has direct forward compatibility to full Danksharding. Sustained load of this EIP is much lower than EIP-4488.
Discussing DoS/spam risk: see the "mempool issues" section here for the answer: notes.ethereum.org/@vbuterin/blob…
I mistakenly forked this! Continues here:
As mentioned before, this EIP will include its own multidimensional EIP-1559 only for data blobs. There are multiple ways to do this; the approach being taken for this EIP is like an AMM curve:

ethresear.ch/t/make-eip-155…
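Roughly what an "AMM-curve-like" blob fee rule can look like, as I understand the linked idea (the constants here are made up, not the EIP's actual parameters): track how far cumulative blob usage has run ahead of the per-block target and price the blob basefee exponentially in that excess.

```python
import math

# Sketch of an exponential blob basefee in the spirit of the linked proposal.
# All constants are illustrative, not the EIP's parameters.
TARGET_BLOBS_PER_BLOCK = 8
MIN_BASEFEE = 1            # minimum fee unit, illustrative
UPDATE_FRACTION = 32       # controls how fast the fee reacts to sustained excess

def blob_basefee(excess_blobs):
    """Basefee grows exponentially in the accumulated excess over the target."""
    return MIN_BASEFEE * math.exp(excess_blobs / UPDATE_FRACTION)

excess = 0
for blobs_in_block in [8, 8, 16, 16, 16, 2]:
    excess = max(0, excess + blobs_in_block - TARGET_BLOBS_PER_BLOCK)
    print(blobs_in_block, "blobs -> basefee", round(blob_basefee(excess), 3))
```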
A lot of the groundwork for the EIP is already done. Direct forward compatibility to danksharding: need to implement 2D sampling, data availability sampling, PBS, and proof-of-custody
The complexity is largely on the blob creation side

We're now over time!
Last question: trusted ceremony?
(^That was my question: I hope someone asks it)

Vitalik: PBS is a requirement for danksharding, but not for this EIP
Vitalik (answering my trusted ceremony question above): It requires a trusted setup ceremony, but to a much lower degree than zk-SNARKs; comfortable with it. It just needs to be big enough to handle the maximum blobs
Micah: Can this ceremony be reused?

Vitalik: Can be built upon
Dankrad: You can have synchronous calls from ZKRs because all proofs, data, everything can be part of a single transaction, despite the data being published on the data shards and not the execution layer
Vitalik: Data availability sampling is still under discussion, the roadmap slowly increases data capacity, e.g. some nodes start sampling before others
Micah: How does this fit in with 4444 and 4488?

Vitalik: Hoping 4444 is done ASAP. I guess 4488 too. Can we compromise and do 4466? (lol)

4488 - depends on how fast we can do this EIP. Either this EIP gets done quickly, or have 4488 first

Dankrad: High fees are an emergency
That's a wrap! Feel free to AMA anything here
Encore: Danksharding is a situation because George likes to refer to everything as a situation
