Some thoughts:

1) Optimistic rollups can have stateless clients. With designs like danksharding + relevant access opcodes, high-frequency state expiry becomes trivial because state is easily reconstructed from L1. High-throughput optimistic rollups are not impossible.
2) ZK rollups can have validity-proven nodes à la Mina. I believe StarkNet is working on this?

In both of the above cases - statelessness and validity proofs - you can have high throughput while keeping the cost of directly verifying the rollup low.
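To make the "cheap to verify" point concrete, here's a minimal Python sketch - all names and the accounting model are hypothetical, and real rollups commit to state with Merkle/Verkle tries over compressed calldata, not JSON hashing. The idea is just that a verifier holds no local state at all: it replays the data posted to L1 and checks the claimed state root.

```python
import hashlib
import json

def state_root(state: dict) -> str:
    # Toy commitment to the state; real rollups use a Merkle/Verkle trie root.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def replay(txs) -> dict:
    # Reconstruct the rollup's balance map purely from data posted to L1.
    state: dict = {}
    for sender, recipient, amount in txs:
        state[sender] = state.get(sender, 0) - amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def verify_claimed_root(txs, claimed_root: str) -> bool:
    # "Stateless" verification: no local database, only the posted data.
    return state_root(replay(txs)) == claimed_root

txs = [("alice", "bob", 5), ("bob", "carol", 2)]
honest_root = state_root(replay(txs))
assert verify_claimed_root(txs, honest_root)
assert not verify_claimed_root(txs, "bogus_root")
```

A validity-proven design goes one step further: the verifier checks a succinct proof instead of replaying, so verification cost stays flat even as throughput grows.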
3) Cost of block production will still be high. But as Vitalik's Endgame article argues, this is pragmatic as long as users can verify easily, and we have censorship-resistance techniques like crLists that force block producers to include transactions.

vitalik.ca/general/2021/1…
Further, rollups have a much weaker trust assumption - 1-of-N - so one can argue that higher system requirements for rollup block producers are much safer than for equivalent monolithic chains.
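A rough sketch of the crList idea, under my own simplified assumptions (actual proposals differ in how the list is built and in the escape hatch for full blocks): a block that omits a listed transaction without being full is simply invalid, so producers can't quietly censor.

```python
def block_satisfies_crlist(block_txs: set, cr_list: set,
                           gas_used: int, gas_limit: int) -> bool:
    # Every transaction on the censorship-resistance list must be included,
    # unless the block is already full (the usual escape hatch).
    missing = cr_list - block_txs
    return not missing or gas_used >= gas_limit

# Including the listed tx: valid.
assert block_satisfies_crlist({"tx1", "tx2"}, {"tx1"}, gas_used=10, gas_limit=30)
# Dropping it with room to spare: invalid.
assert not block_satisfies_crlist({"tx2"}, {"tx1"}, gas_used=10, gas_limit=30)
# Dropping it because the block is genuinely full: tolerated.
assert block_satisfies_crlist({"tx2"}, {"tx1"}, gas_used=30, gas_limit=30)
```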

4) Finally, you can just run an L1 node. Today, the trade-off is latency, but as rollups mature and build activity, and with danksharding-type designs, you can totally have rollups where rollup finality = L1 finality. Interestingly, you can also have rollup finality shorter than L1 finality with recursive validity proofs, but that's for another time.
5) Rollups have a wide-open design space. While some rollups may choose not to implement statelessness or validity proofs, keeping it very easy to run a rollup node, other rollups won't. For example, dYdX already regularly processes more computation than Ethereum.
6) Of course, even with mature ecosystems, there'll still be limits to how far a single rollup's throughput can go. You need to ensure the infrastructure is feasible too. My contention, though, is that this ceiling can be very high - and much higher than is safe for monolithic chains.
7) Cross-rollup interoperability is much better & safer than cross-L1. With danksharding, two-slot communication between ZK rollups should be fine, and even single-slot transactions might be possible with crLists (TBD on this one). Liquidity sharing is also possible (dAMM).
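A toy model of the two-slot flow - everything here is hypothetical naming, and real designs hinge on the proof system and danksharding details: a ZK rollup's outbound message is final on L1 in slot n once its validity proof lands, and the destination rollup consumes it in slot n+1.

```python
from dataclasses import dataclass, field

@dataclass
class L1:
    # Messages finalized per slot: slot -> list of (source_rollup, payload).
    finalized: dict = field(default_factory=dict)

    def post(self, slot: int, source: str, payload: str):
        # A ZK rollup's message is usable as soon as its validity proof lands.
        self.finalized.setdefault(slot, []).append((source, payload))

    def read(self, slot: int, current_slot: int):
        # A destination rollup may only consume messages from earlier slots.
        if slot >= current_slot:
            raise ValueError("message not yet finalized for consumers")
        return self.finalized.get(slot, [])

l1 = L1()
l1.post(slot=10, source="rollup_A", payload="transfer 5 ETH to rollup_B")
# One slot later, rollup B can act on it: two-slot end-to-end latency.
msgs = l1.read(slot=10, current_slot=11)
```

Collapsing this to a single slot is the part that might need crLists, since the destination would have to act on a message in the same slot it's produced.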
8) We definitely want the settlement layer to be stateless or, ideally, validity-proven. Ethereum is working on both statelessness and the zkEVM - but these will take time. The endgame is one validity proof to rule them all, verifying all rollups.
The main challenge, as I see it, is actually coordinating all of this data and having robust systems that make it easy for stateless and validity-proven clients to access all of it. This is a 1-of-N assumption, so I'm not concerned, but it'd be great to have multiple solutions.
Of course, I get that John is working on solving the above problems with Fuel V2 - that's great. But I wanted to question whether other rollups are a dead end, forever limited by state bloat. That just doesn't align with anything I have seen, but hey, I'm just a minor hobbyist!
9) Finally, and perhaps most interestingly: using Durin (which has just been finalized as EIP-3668!) or similar protocols, you can have Ethereum light clients act as light clients for rollups.
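The EIP-3668 (CCIP Read) flow can be sketched like this: a contract call "fails over" to an off-chain gateway, and a verifying callback checks the response, so a light client never has to trust the gateway. This Python mock is purely illustrative - the gateway, proof, and names are all made up, and real clients do this over JSON-RPC with the callback verified on-chain.

```python
class OffchainLookup(Exception):
    # Mirrors EIP-3668's revert: where to fetch data and how to verify it.
    def __init__(self, urls, call_data, callback):
        self.urls, self.call_data, self.callback = urls, call_data, callback

def ccip_read(call, fetch):
    # Generic client loop: try the call; on OffchainLookup, fetch the
    # off-chain response from a gateway and feed it to the verifying callback.
    try:
        return call()
    except OffchainLookup as lookup:
        response = fetch(lookup.urls[0], lookup.call_data)
        return lookup.callback(response)

# Hypothetical gateway serving rollup state alongside a (toy) proof.
def gateway(url, call_data):
    return {"balance": 42, "proof": "valid"}

def contract_call():
    def verify(resp):
        # The real callback verifies a proof against an on-chain commitment.
        assert resp["proof"] == "valid"
        return resp["balance"]
    raise OffchainLookup(["https://gateway.example"], b"balanceOf(alice)", verify)

balance = ccip_read(contract_call, gateway)  # -> 42
```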

Side-note: I definitely see application-specific rollups making sense for certain applications, particularly as recursive rollups / L3s.

Eventually, I can see composability being solved across all rollups in the same network.

medium.com/starkware/frac…

Thread by polynya.eth

