1/ It's high time for @optimismPBC vs @arbitrum. I know you've been waiting for it so let's go! 🔥🔥🔥
Long 🧵
2/ Let me start with similarities. They both:
• are rollups, i.e. real L2s that store all txs on L1
• are optimistic, meaning they use fraud proofs
3/
• use sequencers for instant "finality"
• have generic cross-chain messaging, allowing the creation of advanced token bridges like @MakerDAO's fast withdrawal bridge: forum.makerdao.com/t/announcing-t…
4/ Now, the fun part: differences. The biggest distinction is what happens when two parties disagree on the state after executing a tx, i.e. the implementation of the fraud proof (FP) mechanism.
5/ Optimism uses single round fraud proofs. This means that L1 executes the whole L2 transaction on-chain to verify the state root. This makes FPs instant which is nice.
6/ But there are some problems too:
• you need to supervise tx execution, hence the need for the OVM (aka rewriting the EVM to avoid side effects)
• L2 tx gas is bounded by the L1 block gas limit
• you need on-chain state roots after each tx, which costs more :(
• it's a source of potential security issues
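To make the single-round mechanism concrete, here is a toy Python sketch. All names and the state model here are made up for illustration, not Optimism's actual contracts: L1 re-executes the whole disputed tx and compares the resulting state root with the claimed one.

```python
import hashlib

def state_root(state: dict) -> str:
    """Toy state commitment: hash of the sorted state items
    (a stand-in for a real Merkle root)."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def apply_tx(state: dict, tx: dict) -> dict:
    """Toy transfer: move `amount` from tx['from'] to tx['to']."""
    new = dict(state)
    new[tx["from"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def single_round_fraud_proof(pre_state: dict, tx: dict, claimed_root: str) -> bool:
    """L1 re-executes the WHOLE L2 tx and checks the claimed post-root.
    Returns True if the claim is fraudulent."""
    return state_root(apply_tx(pre_state, tx)) != claimed_root
```

Note how the full tx executes inside the dispute check — that's exactly why L2 gas is bounded by what L1 can execute in one block.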
7/ Arbitrum features multi-round fraud proofs. You can dumb it down to a binary search between the two parties to find the first opcode in a whole block whose result they disagree on. Once found, only this particular opcode is executed on-chain.
8/ It has some nice properties:
• it requires posting just one state root on-chain for a whole bunch of txs,
• the L1 block gas limit doesn't matter, since L2 txs are never executed entirely on L1
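The bisection described above can be sketched in a few lines of Python. This is a simplified model, not Arbitrum's actual protocol: each party commits to a trace of state hashes (one per opcode), and a binary search finds the first step where they diverge, assuming the final hashes differ (otherwise there would be no dispute).

```python
def find_first_disagreement(claimer_trace: list, challenger_trace: list) -> int:
    """Binary-search for the first step whose post-state hash the two
    parties disagree on. Each trace holds one state hash per opcode.
    Only the single step found is then re-executed on-chain."""
    lo, hi = 0, len(claimer_trace) - 1
    # Invariant: parties agree on everything before `lo`,
    # and disagree at `hi`.
    while lo < hi:
        mid = (lo + hi) // 2
        if claimer_trace[mid] == challenger_trace[mid]:
            lo = mid + 1   # agreement up to mid: the dispute lies later
        else:
            hi = mid       # disagreement at mid or earlier
    return lo  # index of the single opcode to execute on L1
```

The search takes O(log n) interactive rounds — that, plus challenge windows per round, is where the multi-week worst case comes from.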
9/ Drawbacks:
β’ It requires EVM -> AVM translation (thankfully it's automatic)
• it's slow: in the worst case it takes up to 2 weeks to finish a FP, realistically about 1 week
• it requires the original claimer to be online and cooperative
10/ Another way of thinking about this is that Optimism does containerization and Arbitrum virtualization.
11/ Optimism's approach has one *HUGE* drawback. Imagine that there is a hardfork and Ethereum consensus rules change. One of the opcodes is removed/repriced or modified in some other way.
12/ Suddenly, re-executing a past tx on L1 will result in a different final state 🚨 I am not sure how the Optimism team is going to solve this, but I am sure they will figure out something when the time comes. Arbitrum fully controls the AVM spec and doesn't have this problem.
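A toy example of why repricing breaks replay. This is an invented mini-interpreter, not either project's code; the point is that the same tx with the same gas limit can succeed under one gas schedule and run out of gas under another (opcode repricings like this have really happened on Ethereum, e.g. EIP-2929 raising cold SLOAD to 2100 gas).

```python
def execute(ops: list, gas_limit: int, gas_cost: dict) -> int:
    """Toy interpreter: run ops until gas runs out.
    Returns the final counter; on out-of-gas the tx reverts to 0."""
    gas, counter = gas_limit, 0
    for op in ops:
        if gas_cost[op] > gas:
            return 0  # out of gas: whole tx reverts
        gas -= gas_cost[op]
        counter += 1
    return counter

ops = ["SLOAD"] * 5
old_prices = {"SLOAD": 200}    # pre-fork gas schedule (made-up numbers)
new_prices = {"SLOAD": 2100}   # post-fork repricing
# Same tx, same gas limit, different fork rules -> different final state.
result_old = execute(ops, gas_limit=2000, gas_cost=old_prices)
result_new = execute(ops, gas_limit=2000, gas_cost=new_prices)
```

If L1 uses the new schedule to re-execute an old L2 tx, it computes a state the honest sequencer never produced — and the fraud proof wrongly fires.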
13/ Both projects try to stick as close as possible to the Ethereum ecosystem, but there are some differences here too. Generally speaking, you can still use the EVM-related tooling that you know (Solidity, Hardhat, Waffle, etc.) BUT it's not that simple.
14/ Optimism requires a special Solidity compiler to generate OVM bytecode. So, unfortunately, it works only with Solidity, and only with particular versions of it. On the other hand, their L2 node is just modified geth, which is great for compatibility.
15/ Arbitrum on the surface is fully compatible with the EVM/JSON-RPC spec, but their node is a custom implementation. It does automatic EVM → AVM transpilation to support fraud proofs. Thanks to this low-level translation, it supports any EVM language (Vyper, Yul+, etc.).
16/ Optimism uses WETH but Arbitrum has native ETH support. Optimism launches with wallet abstraction built in too.
17/ Arbitrum launches with a unified permissionless bridge that can bridge any token to L2 (it deploys a generic ERC20 as an L2 counterpart). Optimism prefers dedicated bridges but, of course, deploying a "unibridge" on Optimism is possible as well. @dmihal knows more about this ;)
18/ The last difference is the launch date. Arbitrum launches a "mainnet for developers" at the end of the month. For Optimism, we will have to wait until July.
19/ Personally, I am cheering for both projects and I can't wait for them to arrive on mainnet. The whole Ethereum community is in desperate need of a proper L2 solution, not some smoke-and-mirrors "scalable" sidechain (ahem).
20/ Oof, this was my longest tweetstorm ever. Let me know what you think. If you want to check out an Optimism bridge example, take a look at: github.com/BellwoodStudio… We will have an Arbitrum-compatible version ready next week
Just this week, two highly anticipated universal zk-EVMs launched: @zksync Era and @0xPolygon zkEVM! Both use validity proofs for execution correctness, but how do they differ?
Let's dive into state diffs vs TX data! 👇
🅰️ Polygon zkEVM pushes L2 tx data to L1 (like optimistic rollups). From, to, value: it's all there! Check out this batch submitted by a sequencer: etherscan.io/tx/0x19e45aaef… You can even grep for L2 addresses!
🅱️ zkSync Era, on the other hand, stores only state diffs on-chain, the "effect" on the state after executing L2 txs: etherscan.io/tx/0x91550b539… Here's what's encoded in the calldata.
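A toy sizing model of the two approaches (invented encoding, real rollups use compressed binary formats, so only the trend matters): posting tx data grows linearly with the number of txs, while a state diff collapses many txs touching the same accounts into one net change.

```python
def tx_data_bytes(txs: list) -> int:
    """Tx-data rollup: every L2 tx is posted to L1 individually."""
    return sum(len(repr(tx).encode()) for tx in txs)

def state_diff_bytes(txs: list, initial_state: dict) -> int:
    """State-diff rollup: only the NET change after executing all
    txs in the batch is posted to L1."""
    state = dict(initial_state)
    for tx in txs:
        state[tx["from"]] -= tx["amount"]
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    diff = {k: v for k, v in state.items() if v != initial_state.get(k, 0)}
    return len(repr(diff).encode())
```

For 100 transfers bouncing between the same two accounts, the diff stays roughly constant-size while the tx data is 100x one tx — but note the trade-off: you can no longer grep L1 for individual L2 txs.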
So, how do these approaches impact system properties in practice? 🤔
I synced full nodes for the Arbitrum and Optimism rollups.
Optimism was done after only 2.5 days, but Arbitrum took a whole 3 weeks to synchronize.
What's going on here? 🧵
First of all: YES, the Arbitrum network is more popular. It has about 35% more txs, a bigger state, and so on. However, that doesn't explain why it takes *~10x* longer to sync. /2
The real answer lies in Arbitrum's node implementation details. Right now it's custom work, with components written in Go, C++, and a custom language called Mini. Furthermore, it uses the AVM (Arbitrum's Virtual Machine) to emulate the EVM. /3
How I attempted to break @fuellabs_ v1: a short story about the importance of running validators for optimistic rollups.
Let's start from the beginning: 🧵👇
1/ Fuel v1 is a (very first!) optimistic rollup running on top of Ethereum. It uses the UTXO model to make tx execution parallel. But for the sake of this discussion, it is interesting for a few other reasons:
2/ • It's fully decentralized: working fraud proofs, no admin keys, no upgrade mechanism, etc.
• It's a ghost town. TVL according to L2Beat is below $10 and the last transaction happened more than a year ago.
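The UTXO-parallelism point above boils down to a simple rule, sketched here in toy Python (made-up structures, not Fuel's actual tx format): two txs can execute in parallel iff the sets of UTXOs they consume are disjoint, so a greedy scheduler can pack independent txs into concurrent batches.

```python
def can_run_in_parallel(tx_a: dict, tx_b: dict) -> bool:
    """Two UTXO txs are independent iff they spend disjoint inputs --
    then neither can invalidate the other's inputs."""
    return set(tx_a["inputs"]).isdisjoint(tx_b["inputs"])

def parallel_batches(txs: list) -> list:
    """Greedy scheduler: put each tx into the first batch where it
    conflicts with nothing already scheduled."""
    batches = []
    for tx in txs:
        for batch in batches:
            if all(can_run_in_parallel(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches
```

In an account-based model (like the EVM) the same trick needs full read/write-set analysis, which is why UTXOs make parallelism so much easier.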
I am stoked to finally talk about DAI Wormhole 🪱🕳️. It will allow users to teleport DAI between L2s while being fast, cheap, and secure at the same time.
Our mission is to make DAI the first truly cross-chain stablecoin.
2/ Okay, so how does it work? First, a user burns DAI on domain A. Then by providing an oracle attestation of the burn, they can mint fresh DAI on domain B. And... that's it!
3/ Under the hood, MakerDAO keepers will ensure that DAI is actually moved between domains and that the debt is settled. The twist is that a single L1 settlement can finalize hundreds of wormholes between L2s in constant time. That's how we get scalability.
The funny thing is that the best talks last week in Paris didn't happen during @EthCC but a Geth workshop organized by @optimismPBC.
Short π§΅:
It was organized in an amazing, stylish venue and, at the beginning, there were only ~15 ppl, including galaxy brains like @VitalikButerin, @karl_dot_tech, and @ben_chain. The quality of discussions during lunch was ridiculous.
@kelvinfichter gave a talk on understanding geth internals by diving into the implementation of a JSON-RPC call.
What's crazy is that we even skimmed through the code responsible for the EIP-1559 consensus bug that hit Ropsten later that day. Ooops...
I just saw A LOT of excitement about fast confirmations on the newly deployed @Uniswap on @optimismPBC, but how do they really work, and can users trust them? Finally, isn't a single sequencer providing such confirmations a threat to decentralization? Let's talk it through 👇👇👇
2/ First things first: sequencers are privileged actors in many rollup systems (@optimismPBC, @arbitrum, @StarkWareLtd, @zksync). They receive transactions from users, order them, and submit them in batches to L1.
3/ The main reason why they exist is the simplicity and efficiency of having a single coordinator. At this stage, there is usually a single sequencer per rollup, and it's run by the creators of a rollup.
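The receive-order-batch role described above fits in a few lines of toy Python (a made-up first-come-first-served policy; real sequencers have more elaborate ordering rules and give out signed soft confirmations):

```python
def sequence(mempool: list, batch_size: int) -> list:
    """Toy sequencer: order incoming txs (first-come-first-served here)
    and chunk them into fixed-size batches for L1 submission."""
    ordered = sorted(mempool, key=lambda tx: tx["received_at"])
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]
```

The "fast confirmation" a user sees is just the sequencer's promise about where their tx sits in `ordered`, long before the batch lands on L1 — which is exactly why trusting it means trusting that single coordinator.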