I've been very quiet lately as I'm laser-focused on #cassie cleanup, improvements, testing etc, but it's going well.
Starting to get some tangible results that are interesting to the #crypto crowd so thought I'd share in a thread ⚠️
Some background first.
The #cassie code base before starting this work was a real bloody mess!
My focus was to find a solution/implementation that worked, not to write clean, pretty, fast code.
Hacking around with ideas & progressive theory produces terrible code.
So much so that even though I'd proven out the theory to my (and others') satisfaction, running long tests was prone to crashing because some no-longer-used code was being called somewhere deep in #cassie's belly under some edge condition. Very frustrating!
Most of the work I've been doing hasn't been in the domain of optimization, even though it usually does produce performance improvements.
Namely things like improving exception handling, argument checks where they were missing, better use of threads, pipelining improvements...
... on producers/consumers, improvements to concurrency handling using better-suited primitives or different locking styles, or even removing redundant locks where possible, refactoring the many huge hunks of monolithic code and of course...
Deleting code that is no longer needed but is frequently still being called for all manner of stuff!
Deleted code has zero bugs, it's my favorite!
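To give a flavour of the kind of change (a minimal sketch, NOT #cassie's actual code, and I'm assuming Java here): swapping hand-rolled locking on a producer/consumer boundary for a better-suited primitive like a bounded BlockingQueue gives you back-pressure and thread-safety for free, plus an obvious home for the argument checks that were missing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch, NOT #cassie's actual code: a bounded BlockingQueue
// on a producer/consumer boundary replaces hand-rolled wait/notify + locks.
public class PipelineSketch
{
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    // Producer: blocks when the queue is full -> natural back-pressure.
    public void produce(String item) throws InterruptedException
    {
        // the kind of argument check that was missing in places
        if (item == null || item.isEmpty())
            throw new IllegalArgumentException("item must be non-empty");
        queue.put(item);
    }

    // Consumer: blocks until an item arrives -- no busy-waiting,
    // no explicit lock to forget, leak or double-acquire.
    public String consume() throws InterruptedException
    {
        return queue.take();
    }
}
```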
Anyway still a ways to go but moving very nicely in the right direction.
Before I started I did a baseline soak test...
Small network to keep things simple: 8 shard groups, 4 validators in each, each with a spec of 4 cores / 8 GB + SSD, injecting 10M tweets as fast as possible with a 20% failure rate. Failures are expensive!
At each validator #tps averaged 170, finality 38s, bandwidth use ~8MB/s
Same test run today: #tps is up to a 300 average, finality ~31s, bandwidth ~4MB/s
Oh and it crashes MUCH less haha 😎👍
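For anyone skimming, the rough deltas between those two runs (just throwaway arithmetic on the numbers above, nothing more):

```java
// Throwaway arithmetic on the soak-test numbers above, nothing more.
public class SoakDeltas
{
    public static void main(String[] args)
    {
        // 8 shard groups x 4 validators = 32 validators total
        double tpsGain     = (300.0 - 170.0) / 170.0; // ~0.76 -> ~76% more tps
        double finalityCut = (38.0 - 31.0) / 38.0;    // ~0.18 -> ~18% faster finality
        double bwCut       = (8.0 - 4.0) / 8.0;       // 0.50  -> half the bandwidth

        System.out.printf("tps +%.0f%%, finality -%.0f%%, bandwidth -%.0f%%%n",
                tpsGain * 100, finalityCut * 100, bwCut * 100);
    }
}
```

So ~76% more throughput per validator, ~18% faster finality, and half the bandwidth, on the same 32-validator (8 × 4) network.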
The purpose of the test is to ensure that #cassie can support validator set sizes of at least 100.
100 validators provides an acceptable level of security, decentralization etc ... more is always better OFC.
As #cassie is the first leaderless multi-decree #BFT, it's a critical test! For over 18 months it's simply been theory and a hunch that this would work.
To make things "worse", the test itself is configured to be HORRIFIC, with parameters WAY outside nominal.
There are a lot of questions about #cassie: what is the significance, why is it cool, what does it demonstrate, how is it #web3? Even more so since the #radflix demo dropped & the exposure it got on various socials etc, so here is a thread...
Hopefully y'all know by now that it is primarily a research network. It started as a collection of radical ideas around #consensus, #sharding, #blockchain, and #cassie is the embodiment of those ideas to demonstrate viability. But it has now become so much more...
It is technology that can do things that were said to be impossible. Technology that can do things other L1 tech hasn't even dreamt of trying to do. It is a technology which is being used to show proof of potential on all the *hardest* things first...
As far as I understand, these solutions are essentially two separate consensus mechanisms.
A probabilistic one to perform state transitions, with a deterministic one to finalise the best version of the former's output.
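Roughly, that pattern looks like this (a hedged sketch of the pattern only, in Java with hypothetical names, not any project's real API):

```java
import java.util.List;

// Hedged sketch of the *pattern* only -- hypothetical names, not any
// project's real API. A probabilistic layer orders state transitions;
// a separate deterministic #BFT layer finalises one version of them.
interface ProbabilisticConsensus
{
    // e.g. repeated random sampling of peers until confidence is high
    List<Transaction> orderTransitions(List<Transaction> pending);
}

interface DeterministicFinality
{
    // e.g. classical BFT voting that irreversibly commits one history
    void finalise(List<Transaction> ordered);
}

record Transaction(String id) {}

class HybridPipeline
{
    private final ProbabilisticConsensus fast;
    private final DeterministicFinality anchor;

    HybridPipeline(ProbabilisticConsensus fast, DeterministicFinality anchor)
    {
        this.fast = fast;
        this.anchor = anchor;
    }

    void step(List<Transaction> pending)
    {
        // one-way coupling: the probabilistic layer feeds the deterministic
        // one, and nothing flows back -- the "open loop" in the car analogy
        List<Transaction> ordered = fast.orderTransitions(pending);
        anchor.finalise(ordered);
    }
}
```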
It works & does the job required, so why is #cassandra different?
....
Think of a hybrid car. There are a petrol engine & an electric motor working together to move the car.
They are 2 separate systems & the coupling is open loop, one way. The engine, if needed, can generate electricity for the motor, but the motor doesn't produce anything useful for the engine...