Ran the test for an hour ... total SUCCESS!

Here's a breakdown 🧵

#cassie #web3 #blockchain #crypto
The purpose of the test is to ensure that #cassie can support validator set sizes of at least 100.

100 validators provide an acceptable level of security, decentralization, etc ... more is always better OFC.
As #cassie is the first leaderless multi-decree #BFT, it's a critical test! For over 18 months it has simply been theory and a hunch that this would work ... until now.

To make things "worse", the test itself is configured to be HORRIFIC, with parameters WAY outside nominal.
Proposal generation is governed by a POW variant.

In this test the difficulty of the POW was reduced dramatically, resulting in each validator creating a proposal every second.

Ideally we want 5-10% of validators producing a proposal every few seconds.
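
Roughly, the mechanic works like this (a minimal sketch for illustration only ... the class/field names and the SHA-256 leading-zero check are my assumptions here, not the actual #cassie code):

// Hypothetical sketch: proposal generation gated by a POW-style lottery.
// Each validator tries nonces every tick; the expected gap between its proposals
// scales with ~2^difficultyBits, so dropping the difficulty floods the proposal pool.
import java.security.MessageDigest;

final class ProposalLottery
{
    // Difficulty = required number of leading zero bits in H(round || validatorKey || nonce).
    static boolean mayPropose(byte[] validatorKey, long round, long nonce, int difficultyBits) throws Exception
    {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(longToBytes(round));
        sha.update(validatorKey);
        sha.update(longToBytes(nonce));
        return leadingZeroBits(sha.digest()) >= difficultyBits;
    }

    static int leadingZeroBits(byte[] bytes)
    {
        int count = 0;
        for (byte b : bytes)
        {
            if (b == 0) { count += 8; continue; }
            count += Integer.numberOfLeadingZeros(b & 0xFF) - 24;
            break;
        }
        return count;
    }

    static byte[] longToBytes(long v)
    {
        byte[] out = new byte[8];
        for (int i = 7; i >= 0; i--) { out[i] = (byte) v; v >>>= 8; }
        return out;
    }
}

Set difficultyBits low enough and effectively every validator clears the threshold each round, which is exactly the abuse this test dials in.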
You can see in the above console output that the "Proposal pool" is almost 400 because the POW difficulty is reduced.

That's 400 possible progress options!

Each validator has to evaluate each one, determine where the majority is converging, and cast its own vote.
Once a proposal in the pool has >2f vote power, it is accepted & committed.
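
As a rough sketch of that commit rule (the types and weighted-vote tally here are illustrative assumptions, not the real #cassie structures):

// Hypothetical sketch: tally vote power per proposal in the pool and
// accept/commit the first proposal whose accumulated power exceeds 2f.
import java.util.HashMap;
import java.util.Map;

final class ProposalPool
{
    private final Map<String, Long> votePowerByProposal = new HashMap<>();
    private final long f; // maximum tolerated Byzantine vote power

    ProposalPool(long totalVotePower)
    {
        this.f = (totalVotePower - 1) / 3;
    }

    // Record a validator's vote; returns the proposal hash if it just became committable.
    String vote(String proposalHash, long validatorPower)
    {
        long power = votePowerByProposal.merge(proposalHash, validatorPower, Long::sum);
        return power > 2 * f ? proposalHash : null; // >2f vote power => accepted & committed
    }
}

Note the threshold is over vote power rather than a simple head count, so the rule is the same whether a handful of proposals or 400+ are in flight.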

In the test, the proposal pool was consistently averaging a little over 400.

Under nominal conditions, I'd expect the size of the pool to be ~10-15 on average.
Under these extreme conditions finality suffers, as you can see: in the image it's averaging ~70 seconds, compared to ~7-12s under nominal conditions.

However, considering the abuse each validator is enduring dealing with 400 proposals every couple of seconds, 70 seconds isn't too bad at all.
I'm also running a small amount of spam, ~10 TPS, so as not to trigger any execution or verification shortcuts; this adds ~10% to finality.

Even though this test isn't concerned with throughput at all, I wanted to touch all parts of the protocol, hence the spam.
Just to make things a little harder, I went #fullsend with 128 validators even though I only needed to test to 100 for confidence.

Here's a little snip showing that 125/128 validators have acquired some vote power and are therefore voting, adding authentication load to the system.
Finally, I wanted to gauge whether my authentication complexity is as low as I think it is.

Under the best conditions, proposal voting verification complexity can be almost O(1). Under these conditions I'm expecting somewhere around O(log n).
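
To make those two bounds concrete (a toy cost model only, nothing to do with the actual #cassie authentication path):

// Toy cost model: with full aggregation a proposal's vote certificate is ~one verification;
// if partial aggregates have to be combined pairwise it's ~log2(n) verifications instead.
final class AuthCost
{
    static long bestCase(int validators) { return 1; }   // ~O(1)

    static long thisTest(int validators)                 // ~O(log n)
    {
        return Math.round(Math.ceil(Math.log(validators) / Math.log(2)));
    }

    public static void main(String[] args)
    {
        System.out.println(bestCase(128)); // 1
        System.out.println(thisTest(128)); // 7
    }
}

So even under these degraded conditions, per-proposal verification work should grow with the log of the validator count, not linearly.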
A quick check of CPU use shows each machine using ~50% of CPU. Not bad ... until I tell you each machine is running between 8-32 #cassie instances depending on spec! 😎

The one here is a beefy box so has 32!
All in all, a mind-blowing success!

Leaderless multi-decree BFT is now really REALLY a thing! 😎
Now ... I'm sure some 🤡 will say this isn't a valid test.

The machines running the #cassie instances are on different continents, maintain open cross-connections with each other, and perform all signing/verification & consensus, doing so under some extreme conditions!

So whatever ...
