How we got to "minPoolFee": The ITN was a completely free-market economy where minPoolFee and operatorMargin could both be set to 0. During this time, the game theory worked as intended: delegator stake moved en masse to the pools charging the lowest possible fees...
During HTN the concern was raised regarding the Sybil attack vector: a well-funded actor with outside income sources could afford to run (multiple) pools at "zero" cost to delegators to attract all their stake. Since entrenched delegators tend to stick around...
This could present a problem either via that bad actor flipping the switch and charging an enormously high rate (not against "code is law"; it's up to the delegators to monitor and move), or by driving the rest of the "honest" stake pools out of business and thereby attacking the network...
So... the "minimum pool cost" that IS in the IOG research (although maybe not sufficiently tested/modeled), i.e. operators charging what it actually costs to run their stake pool, was floated as a solution... To arrive at 340 ADA, IOG polled the existing SPO community...
And arrived at rough average costs of, I believe, ~$210 USD per month to run a stake pool ($ADA was ~$0.10 USD at the time, so 340 * 6 epochs = 2040 ADA/month * $0.10 = $204 USD/month). So, the rate (at the time) was set by the SPOs (at the time)... Obviously... there have been some changes since then
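The back-of-the-envelope math above can be written out explicitly. This is just a sketch of the tweet's arithmetic, assuming the ~6 five-day epochs per 30-day month and the ~$0.10 ADA price cited at the time:

```python
# Sketch of the minPoolCost -> monthly USD conversion from the thread above.
# All figures are the assumptions stated in the tweet, not current values.
MIN_POOL_COST_ADA = 340   # per-epoch minimum pool cost, in ADA
EPOCHS_PER_MONTH = 6      # ~30 days / 5-day Cardano epochs
ADA_PRICE_USD = 0.10      # approximate ADA price when 340 was chosen

monthly_cost_usd = MIN_POOL_COST_ADA * EPOCHS_PER_MONTH * ADA_PRICE_USD
print(f"${monthly_cost_usd:.2f} USD/month")  # → $204.00 USD/month
```

Which lands close to the ~$210/month average the SPO poll reportedly produced.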
This is why I am in favor of modifying the minPoolFee but not blanket removing it from the system (IMO a knee-jerk reaction) as the Sybil Attack vector is still in play. If anything, we should campaign for a new vote of all SPOs (using the pool keys to prove one pool one vote)...
To arrive at a "minimum cost to operate" average for all pools across the network. The numbers can then be tweaked/modified per the price of $ADA at any given point in time based on consensus of the impacted community (SPOs, not us armchair quarterbacks)...
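The "tweak per the price of $ADA" idea amounts to inverting that same calculation: agree on a target monthly USD cost, then solve for the per-epoch parameter. A minimal sketch, where the target figure and price inputs are illustrative assumptions, not proposed values:

```python
# Derive a per-epoch minPoolCost (in ADA) from a community-agreed target
# monthly cost in USD and the current ADA price. Inputs are illustrative.
EPOCHS_PER_MONTH = 6  # ~30 days / 5-day Cardano epochs

def min_pool_cost_ada(target_usd_per_month: float, ada_price_usd: float) -> int:
    """Per-epoch minimum cost in ADA that yields the target monthly USD cost."""
    return round(target_usd_per_month / (EPOCHS_PER_MONTH * ada_price_usd))

# Sanity check against the original setting: $204/month at $0.10/ADA
print(min_pool_cost_ada(204, 0.10))  # → 340
```

The same function could be re-run against whatever average the proposed SPO vote produces and whatever the ADA price is at that time.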
This is why I support people and projects like @TobiasFancee and @DrLiesenfelt who believe that the current scheme is flawed, but are actually working to study, analyze, and rationally approach the topic rather than taking quotes and research out of context...
And attempting to make wild and wholesale changes to the underlying parameters w/o consideration for the knock-on effects that this may have months or years down the road. The parameter change itself is easy, the outcome/results of the change are harder to predict.
Damn you @tophinity... here we go... 🧵(1/5)
[Verse 1]
You say that we've got nothing in common
No common ground to start from
And we're falling apart
You'll say the market's come between us
Our bags have come between us
Still I know you just don't hold
(2/5) [Chorus]
And I said "What about, exit liquidity?"
She said, "I think I remember those flips
And as I recall, I think, we both kinda profited"
And I said, "Well, that's one thing we've got"
(3/5) [Verse 2]
I see you - the only one who knew me
And now you right clicked my PFP
I guess I was wrong
So what now? It's plain to see we're over
And I hate when things are over
When so much is left undone
🧵 (1/n) It's important to point out today that the #Cardano #Testnet is **catastrophically** broken due to a bug in Cardano Node v1.35.2. This was the version that we had previously been told was "Tested and Ready" for the Vasil Hardfork. This bug was only discovered...
(2/n) because operators rushed to upgrade on #Mainnet, where it was creating incompatible forks and causing a decrease in chain density. #Kudos to @ATADA_Stakepool and @PooltoolI for digging into the on-chain data to identify 1.35.2 vs. 1.34.1 as the culprit...
(3/n) However, Testnet is still broken: because the majority of operators upgraded to 1.35.2 on testnet to simulate a Vasil HFC event there, 1.35.3 is now incompatible and incapable of syncing the chain, and legacy installations are still on one fork or another...
There seems to be some misunderstanding of what happened to the #Cardano network yesterday so I'd like to provide some clarity if possible. The @YummiUniverse drop happened and went off slowly but without incident (even for the person who sent us 25k WMT).
We opted for the "Wallet Drop" method knowing that demand would be high and that the queuing solutions of other drops are most often what causes the "server crash". We were right...
From 1330-1500 UTC on 09/29/2021 the website received 147.48K requests, transferring 36.36GB of data across 33.52K visits from 101 different countries around the globe. 1.06K requests were blocked by our rate limiting. Thanks to load balancing and Cloudflare, this was fine...