Shared DA (Data Availability) security unleashes the ability to create innovations at the VM layer. One particularly salient use case is sovereign rollups, which can run almost any VM on top of a common DA security layer.
One design pattern we will see a lot is simply using the core L1 (like Ethereum) for censorship resistance, re-org resistance, and DA, but with no validating bridge (aka settlement). There may or may not be off-chain fraud / validity proofs.
This pattern lets you build arbitrary new VMs on top of a common substrate, and full validating nodes of that chain will *fully inherit* safety and liveness from Ethereum!
This is WAY better than a new chain building its own trust network: the new chain is verifiably secure while remaining sovereign. The difference from a rollup is that a rollup inherits full security for the *bridge*, whereas this pattern does not.
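To make the pattern concrete, here is a minimal Python sketch of a sovereign-rollup full node that uses the L1 only for ordering, censorship resistance, and DA. Every name in it (fetch_ordered_blobs, decode_batch, apply_batch) is a hypothetical placeholder for illustration, not a real API.

```python
# Hypothetical sketch of a sovereign-rollup full node that uses the L1 only for
# ordering and data availability. There is no validating bridge or settlement
# contract: the L1 provides ordering security, the local VM provides execution.

def sync_sovereign_rollup(l1_client, vm, genesis_state, from_block, to_block):
    """Derive the rollup chain purely from L1 data."""
    state = genesis_state
    for l1_block in range(from_block, to_block + 1):
        # Ordered, censorship-resistant data feed published to the L1.
        for blob in l1_client.fetch_ordered_blobs(l1_block):
            batch = vm.decode_batch(blob)        # rollup-specific encoding
            if batch is None:
                continue                          # ignore blobs that are not valid batches
            state = vm.apply_batch(state, batch)  # the rollup's own state transition
    return state
```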
One limitation of shared DA security is that it still constrains trust to flow only through the specific consensus protocol and the DA layer. It turns out this is enough for some classes of modules (like those innovating on VMs).
Modules that build new consensus protocols (like Narwhal & Tusk in Sui), new DA layers (new codes), secure multi-party systems (like Penumbra), or those that require additional hardware (SGX systems like Oasis, or GPUs for gaming) cannot leverage a shared DA layer for their security.
@eigenlayer lets you reuse trust for any distributed system that has verifiable on-chain slashing conditions. #OpenInnovation
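For a sense of what "verifiable on-chain slashing conditions" means, here is an illustrative Python sketch of one classic condition: equivocation (double-signing). The types and function names are assumptions chosen for illustration, not EigenLayer's actual interface.

```python
# Illustrative sketch of an objectively verifiable slashing condition
# (equivocation / double-signing): the kind of self-contained, checkable
# predicate that restaked trust can be attached to. All names are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class SignedVote:
    validator: str
    height: int
    block_hash: str
    signature: bytes

def is_slashable_equivocation(vote_a: SignedVote, vote_b: SignedVote, verify_sig) -> bool:
    """True iff the same validator signed two different blocks at the same
    height; both signatures must verify, so the evidence stands on its own."""
    return (
        vote_a.validator == vote_b.validator
        and vote_a.height == vote_b.height
        and vote_a.block_hash != vote_b.block_hash
        and verify_sig(vote_a)
        and verify_sig(vote_b)
    )
```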
One question to @apolynya and others: what if we abandon history storage as a requirement for rollups? Instead, rollups store the latest *state* in DataLayr every few weeks, along with state diffs in every blob.
Any syncing node can then sync to the most recent stored state and apply the state diffs from there. In any case, under PoS, history beyond the weak subjectivity period is not that useful.
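A minimal sketch of this snapshot-plus-diffs sync, assuming a hypothetical da_client abstraction over DataLayr (none of these method names are a real API):

```python
# Minimal sketch of snapshot-plus-diffs sync. DataLayr access is abstracted
# behind a hypothetical da_client; method names are placeholders.

def fast_sync(da_client, apply_diff):
    """Download the most recent full-state snapshot, then replay only the
    per-blob state diffs published after it. History older than the snapshot
    is never needed."""
    snapshot = da_client.latest_state_snapshot()                 # posted every few weeks
    state, snapshot_height = snapshot.state, snapshot.height
    for diff in da_client.state_diffs_since(snapshot_height):    # one diff per blob
        state = apply_diff(state, diff)
    return state
```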
Ethereum hyperscaling primer. Why will the best blockchains have *no tradeoff* between scalability and security? Why is hyperscale Data Availability critical? How does Ethereum get there?
There are four resources for each participating node in a blockchain: (1) computation, (2) state (memory), (3) networking, (4) history storage. Let's assume each node has only a small amount of each of the four resources.
An ideal hyperscale blockchain system will let the *total* system performance scale linearly with the number of participating nodes, while ensuring that the system can tolerate half of all the nodes being adversarial.
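As a toy calculation of what "scales linearly" means here, assume each node contributes only a small, fixed networking budget and the DA work is spread across nodes (e.g., via erasure coding and sampling); the numbers below are made up purely for illustration.

```python
# Toy arithmetic for the hyperscale claim: if DA work is split across nodes,
# total capacity grows linearly in the node count while each node's budget
# stays small. All numbers are illustrative assumptions.

PER_NODE_BANDWIDTH_KBPS = 100   # small, fixed per-node networking budget
CODING_OVERHEAD = 2             # e.g., rate-1/2 erasure coding so half the nodes can misbehave

def total_da_throughput_kbps(num_nodes: int) -> float:
    """Aggregate DA throughput when each node only stores/serves its own sampled share."""
    return num_nodes * PER_NODE_BANDWIDTH_KBPS / CODING_OVERHEAD

for n in (1_000, 10_000, 100_000):
    print(n, "nodes ->", total_da_throughput_kbps(n), "kbps of DA capacity")
```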
The three rates of innovation: Autocratic, Democratic, and Permissionless.
Writing this after listening to @balajis podcast with @sriramk and @aarthir, where he comments on "Exit to World" vs "Exit to Community":
@balajis used "exit to world" to refer to something like permissionless innovation, as opposed to something like democratic innovation, which is what "exit to community" can turn into when not planned properly.
Others are raising a more fundamental question: why is the Ethereum chain designed to be re-orgable?
A brief summary of the Ethereum PoS protocol: it runs like a longest-chain protocol (more specifically, the GHOST protocol) with a finalizing BFT gadget (the Casper protocol) that activates every F blocks; thus F (= 32 in practice) is the period of finalization.
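A schematic toy model of that hybrid design, assuming finalization happens exactly at every F-block checkpoint (real Casper finalization lags by an epoch or two of attestations, which this ignores):

```python
# Toy model of the hybrid design: a chain grows under a fork-choice rule
# (GHOST in Ethereum), and every F blocks a BFT finality gadget (Casper)
# finalizes a checkpoint. Purely schematic; real finalization lags slightly.

F = 32  # finalization period, in blocks

def latest_finalized(head_height: int) -> int:
    """Height of the most recent finalized checkpoint at a given head height."""
    return (head_height // F) * F

def can_reorg(block_height: int, head_height: int) -> bool:
    """Blocks at or below the last finalized checkpoint can no longer be re-orged;
    blocks after it are protected only by the fork-choice rule."""
    return block_height > latest_finalized(head_height)

assert can_reorg(block_height=40, head_height=50)       # past the checkpoint at 32, still re-orgable
assert not can_reorg(block_height=20, head_height=50)   # behind the checkpoint at 32, finalized
```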