There is no official roadmap for #Ethereum; the project moves forward on rough community consensus.
However, a high-level, noncontroversial plan stretching into 2023-24 is generally regarded as the agreed-upon path for the project.
A more technical roadmap with the progress of each step filled in was updated by @VitalikButerin in December 2021.
Specific developments are tied to future upgrades and are always subject to change.
Consider the Merge done 💯!
The roadmap stages
• The Surge: introduce sharding (~2023).
• The Verge: optimize for storage and state size.
• The Purge: reduce congestion and improve storage.
• The Splurge: miscellaneous optimizations.
These stages are not strictly sequential & are being worked on concurrently.
$ETH's roadmap is constantly evolving; however, the flagship upgrades post-merge are the introduction of shard chains to help the network increase transaction throughput, improvements to rollups, and improving Ethereum's ability to manage data storage. @TimBeiko
The Surge itself then is focused on improving scalability at its data availability (DA) layer through data #sharding.
Sharding is the partitioning of a database into subsections. Rather than building layers on top (#L2s), sharding scales horizontally, without a hierarchy.
Ethereum will be split into different shards, each one independently processing transactions.
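A toy sketch of the partitioning idea in Python — the shard count and the hash-based routing rule here are illustrative only, not protocol values:

```python
import hashlib

NUM_SHARDS = 64  # illustrative shard count, not a protocol constant

def shard_for(tx_id: str) -> int:
    """Route a transaction to one shard by hashing its id (toy model)."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Each shard only ever sees the transactions routed to it
txs = ["0xaaa1", "0xbbb2", "0xccc3", "0xddd4"]
by_shard: dict[int, list[str]] = {}
for tx in txs:
    by_shard.setdefault(shard_for(tx), []).append(tx)
```

The point of the sketch: no single shard (and no single validator) touches every transaction, yet every transaction lands in exactly one shard.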
Sharding is often referred to as an L1 scaling solution because it's implemented at the base-level protocol of Ethereum itself, whereas rollups are L2s and present less systemic risk.
In this sharding model, validators are assigned to specific shards and only process and validate transactions in that shard. In Ethereum's planned sharding model, validators are randomly selected.
Every shard has a (pseudo) randomly-chosen committee of validators that ensures it is (nearly) impossible for an attacker controlling less than ⅓ of all validators to attack a single shard.
(randomness is hard and complicated but here's an image of #RANDAO and #VDF) @dannyryan
This means they are only responsible for processing and validating txs in those specific shards, not the entirety of the network.
The randomness of the validator selection process ensures it’s (nearly) impossible for a nefarious actor to successfully attack the network.
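The committee idea can be sketched in a few lines — this is a toy stand-in where a local PRNG plays the role of RANDAO, and committee sizes are illustrative:

```python
import random

def assign_committees(validators: list, num_shards: int, seed: int) -> list:
    """Pseudo-randomly split validators into per-shard committees.
    In Ethereum the seed comes from RANDAO, not a local PRNG (toy model)."""
    rng = random.Random(seed)
    shuffled = validators[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // num_shards
    return [shuffled[i * size:(i + 1) * size] for i in range(num_shards)]

committees = assign_committees(list(range(100)), num_shards=4, seed=7)
```

Because an attacker cannot predict or choose which committee their validators land in, concentrating stake on one shard requires controlling a large share of the whole validator set.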
Shards will be divided among nodes so that every individual node is doing less work.
But collectively, all of the necessary work is getting done—and quicker. More than one node will process each individual data unit, but no single node has to process all of the data anymore.
Ethereum developers are looking to implement #Danksharding (named after Ethereum researcher Dankrad Feist), which aims to improve the efficiency and cost of L2 rollups.
This is because the bottleneck for rollup scalability is data availability capacity rather than execution capacity.
This will give L2s more space to store the chain’s data and offer additional data capacity for rollups.
In the danksharding model, shards will serve as data storage “buckets” for new network data storage demands from rollups.
This enables tremendous scalability gains on the rollup execution layer. @JackNiewold
Just as significant, shards will also help avoid putting overly onerous demands on full nodes, allowing the network to maintain decentralization.
How do we get to sharding, though?
Rolling out a 100% complete version of danksharding is incredibly complex and will likely take 2-3+ years.
Bc of this, there are intermediary options being discussed, including #EIP4488 and EIP-4844 (proto-danksharding). @apolynya
EIP-4488 is the simplest and quickest way to improve rollups and drive down costs. However, it also has the least amount of attention currently.
So, what is it?
EIP-4488 attempts to reduce rollup costs (while mitigating storage bloat) through two primary factors:
1) Reduce calldata cost from 16 gas/byte to 3 gas/byte
2) Limit calldata to 1 MB per block + an extra 300 bytes/tx
This could reduce rollup costs by ~80% (in just a few months!)
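The arithmetic behind that ~80% figure is just the ratio of the two calldata prices:

```python
OLD_CALLDATA_GAS = 16  # gas per non-zero calldata byte today
NEW_CALLDATA_GAS = 3   # gas per byte proposed by EIP-4488

reduction = 1 - NEW_CALLDATA_GAS / OLD_CALLDATA_GAS
print(f"calldata cost cut: {reduction:.0%}")  # prints "calldata cost cut: 81%"
```

Since calldata dominates a rollup's L1 costs, an ~81% cut in calldata gas translates to roughly the ~80% fee reduction the thread cites.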
EIP-4844 (Proto-danksharding or PDS)
Proto-danksharding (PDS) is an alternative to EIP-4488 but is still a temporary stepping stone to the ultimate goal of “full” danksharding. However, even PDS is quite complex. @dankrad
Rather than rollups posting data as calldata (stored on-chain permanently), under PDS rollups could post batches under a new, cheaper "blob" transaction type, with blobs pruned after ~1 month.
Rollup transactions would have their own “channel,” operating through a data blob market that uses its own fee structure & floating gas limits (their own 1559 mechanism!)
This means that even with heightened demand and activity from DeFi or NFTs, data costs won't go up for rollups.
This creates two different gas markets - one for general computation and one specifically for data availability (DA), making the overall economic model more efficient than it was previously.
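The "their own 1559 mechanism" bit can be sketched with the standard EIP-1559 base-fee update rule — a simplified model; the blob market would run its own instance of this with its own target and fee:

```python
def next_base_fee(base_fee: int, gas_used: int, gas_target: int,
                  max_change_denominator: int = 8) -> int:
    """EIP-1559-style update: the fee drifts up when blocks run over
    target and down when they run under (simplified sketch)."""
    delta = base_fee * (gas_used - gas_target) // (gas_target * max_change_denominator)
    return max(base_fee + delta, 1)

# A full block (2x target) raises the fee by the max step, +12.5%
fee = next_base_fee(100, gas_used=30_000_000, gas_target=15_000_000)  # -> 112
```

Running one instance of this for execution gas and a separate one for blob data is what lets the two markets price congestion independently.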
Data blobs are an entirely new transaction format, and only the blob’s hash can be accessed via a new opcode.
This guarantees the data content will never be accessed by the #EVM, reducing the gas cost of posting the data compared to calldata. @jon_charb
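A sketch of the hash the EVM does see, modeled on EIP-4844's versioned-hash scheme (the version byte and construction follow that spec; the commitment bytes here are dummy values):

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"  # version byte per EIP-4844

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """Contracts only ever see this 32-byte versioned hash of the blob's
    KZG commitment, never the blob data itself (sketch of EIP-4844)."""
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

dummy_commitment = b"\x00" * 48  # KZG commitments are 48 bytes
h = kzg_to_versioned_hash(dummy_commitment)
```

Keeping blob bytes out of EVM reach is exactly why blob data can be priced (and later pruned) separately from execution.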
And that's a wrap for now!
If you want to learn more about what the future for #ETH holds, including the Verge, Purge, Splurge, #PBS, and more, check out the full article! cryptoeq.io/articles/now-w…
• • •
Ethereum just moved to #PoS but #Avalanche and its C-Chain have been PoS for ~2 years. So, what's the big deal?
How does $AVAX PoS work?
How does its consensus algo differ from what ETH just implemented?
And can #Avalanche truly have a million+ validators one day??
The Avalanche network doesn’t use just one consensus mechanism but rather a collection of consensus protocols.
What is the Primary Network?
A three-chain (X, P, and C) system that segregates the work done by the overall network.
This enables more efficient use of network resources & the ability to process more txs simultaneously.
Avalanche’s primary network consists of three governing blockchains with diff consensus algos:
Underneath all the songs, pandas, and memes, the #EthereumMerge is bittersweet.
5+ years of waiting, and it's finally here!... Only, it's not like I imagined.
A thread on the ugly/glass-half-empty side of the #Merge from a long-time $ETH bull...
It's going to be impossible to make my argument and not sound whiny or a buzzkill or ungrateful or simply FUDing. That's not my intention. But like with everything, the merge comes with a cost.
However, lemme stress, this is a HUGE accomplishment. Kudos to all the devs involved
Backing up a bit.... blockchains are ONLY worth a damn if they are permissionless, neutral, secure, & censorship-resistant.
That's the truth. If you don't have those, you simply have a corruptible database.
And 99% of the thousands of projects out there don't offer these traits
SNARKs allow someone to prove they have a particular piece of info without actually revealing the contents of the info.
Popularized by @zcash for enabling anonymous txs, zk tech provides scaling efficiencies for the rollup chain that are then submitted to the main chain.
Quick thread on @CryptoEQ Fundamental Ratings as we get TONS of questions around them.... especially in a bull market when XYZ coin is pumping and outpacing #BTC and #ETH
We list ~50 crypto assets but only have a Fundamental rating on ~30. Why so few?
Because that's all that ACTUALLY matters in the #crypto ecosystem.
And, if we're being honest, probably just 15 but we feel a bit obligated to cover the crap/scam coins in the top 30 as warnings
The top ~15 assets make up ~90% of the market cap.
With ~5 of those being #stablecoins and 2 are wrapped assets (#stETH and #WBTC)
So, by and large, we may seem selective but we cover 90%+ of the MC and 99%+ of what is actually legitimate, innovative, or intriguing.
Seems #Ethereum has an affinity for making up words these days! And it all starts with (the normal sounding) #calldata
Let's figure out why, define some of these ridiculous terms, and see how #ETH can get even better
2/ #Rollups (RU) post their compressed L2 batched transactions as calldata onto mainnet Ethereum. But what does that mean and what is calldata? #L2
Calldata (CD) is a specific form of read-only memory data used by smart contracts to call external functions.
Once a RU has batched enough txs, it posts this state transition change in a compressed form to the L1 via CD.
RUs currently utilize L1 CD for data storage, which is limited to ~10KB per block. This is so anyone has the ability to reconstruct the chain & verify the latest state.
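What posting a batch as calldata costs can be computed directly from the current L1 pricing (EIP-2028: 16 gas per non-zero byte, 4 per zero byte) — the batch bytes below are a toy stand-in:

```python
def calldata_gas(data: bytes) -> int:
    """Current L1 calldata pricing (EIP-2028):
    16 gas per non-zero byte, 4 gas per zero byte."""
    return sum(16 if b else 4 for b in data)

batch = bytes.fromhex("00ff00ff")  # toy "compressed rollup batch"
print(calldata_gas(batch))  # 2*16 + 2*4 = 40 gas
```

This per-byte pricing is the lever EIP-4488 pulls (16 → 3 gas/byte) and the cost EIP-4844's blobs route around entirely.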
🧵 on some of the major smart contract chains, their different approaches, and how HOPEFULLY (for the love of all that is holy) we are moving away from simply "X chain is superior because it did XXXX TPS on a closed environment testnet" #Ethereum #terraluna #Solana #AVAX
As @epolynya has alluded to several times, #TPS numbers are almost meaningless now. Especially anything under 100k.
At the risk of having this thrown in my face 5 years from now, TPS is essentially solved.
This is due to many things but some reasons include: