Stephen Boyd is a renowned @Stanford researcher known for his oft-cited textbook, a multitude of INFORMS/IEEE awards, and advising BlackRock on using convex analysis to manage trillions of dollars.
@akshaykagarwal just defended his PhD under Boyd and is known for his work on visualizing embeddings via his open-source package PyMDE (minimum-distortion embedding)
He’s also a core developer of cvxpy (quant traders ♥️ him) and previously worked on TensorFlow 2
Unlike our other papers, which assume knowledge of crypto, DeFi, + convex analysis, this book chapter is pedagogical + from first principles
Goal: Quant-y undergrads who know Multivariable Calc and Linear Algebra (with proofs, like Lang) *should* be able to pick up CFMM theory
We also show a few new nifty features of CFMMs
1/ Simplified proofs
a. Round-trip trades always lose (path deficiency)
b. Liquidity Provider (LP) share value ∝ ∇ϕ(R)'R [= ϕ(R) for 1-homogeneous functions; surprisingly simple! see the sketch after this list]
c. Input + output portfolios disjoint
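To make (b) concrete, here's a quick numerical check (my sketch, not the paper's code) using the constant-product trading function ϕ(R) = √(R₁R₂), which is 1-homogeneous, so Euler's identity gives ∇ϕ(R)'R = ϕ(R):

```python
import numpy as np

# Check grad(phi)(R)' R == phi(R) for the 1-homogeneous
# constant-product trading function phi(R) = sqrt(R1 * R2).
R = np.array([1200.0, 3.5])  # assumed reserves; any positive values work

phi = np.sqrt(R[0] * R[1])
grad = 0.5 * np.array([np.sqrt(R[1] / R[0]), np.sqrt(R[0] / R[1])])

print(grad @ R)  # matches phi: LP share value tracks phi(R)
print(phi)
```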
2/ Explicit formulas for add/remove liquidity
Previous papers assumed reserves were constant
We provide a connection between the trading function gradient and the change to liquidity ▶️ helps improve concentrated liquidity formulas (e.g. @Uniswap V3)
e.g. result below:
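One special case that's easy to verify by hand (my toy example, not the paper's general formula): for a 1-homogeneous ϕ, depositing pro-rata to the current reserves scales ϕ, and hence total LP share value, linearly:

```python
import numpy as np

# Pro-rata liquidity addition under a 1-homogeneous trading function:
# scaling reserves by (1 + eta) scales phi by exactly (1 + eta).
phi = lambda R: np.sqrt(R[0] * R[1])  # constant-product, 1-homogeneous
R = np.array([1200.0, 3.5])           # assumed reserves
eta = 0.10                            # deposit 10% more, pro-rata

print(phi((1 + eta) * R))             # equals (1 + eta) * phi(R)
print((1 + eta) * phi(R))
```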
3/ Exchange Functions
Our curvature paper only showed that properties of liquidity (e.g. curvature at a fixed reserve) are too state dependent
We elucidate some properties of changes to liquidity via _exchange functions_ which turn out to be concave/convex (*w/o* metric properties)
Their metric properties, which do depend on a particular parametrization and reserve, are shown to be easily computed numerically
This, again, is very useful for measuring impact to concentrated liquidity (e.g. you can extend exchange functions by linearity to piecewise-convex curves)
Finally, exchange functions generalize the invariant calculation done by @CurveFinance to general CFMM curves
There's a simple Newton iteration for computing trade size from exchange functions (one-dimensional root-finding; sketch below)
[Remember when @samczsun found a bug in curve's Newton iteration?]
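Here's a minimal sketch of such an iteration (illustrative only, not Curve's actual code): given an input Δ of asset 1 to a constant-product pool, solve ϕ(R₁ + Δ, R₂ − Λ) = ϕ(R₁, R₂) for the output Λ:

```python
# Newton iteration for the trade size: find y with
# phi(R1 + dx, R2 - y) = phi(R1, R2), constant-product phi.
R1, R2, dx = 1200.0, 3.5, 50.0  # assumed reserves and input size

phi0 = (R1 * R2) ** 0.5
f = lambda y: ((R1 + dx) * (R2 - y)) ** 0.5 - phi0
df = lambda y: -0.5 * ((R1 + dx) / (R2 - y)) ** 0.5

y = 0.0
for _ in range(20):
    y -= f(y) / df(y)

print(y)                         # Newton's answer
print(R2 - R1 * R2 / (R1 + dx))  # closed form, same value
```

For constant product there's a closed form, of course; the point is that the same iteration works for curves (like Curve's) where there isn't one.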
4/ Expected Utility Portfolio
We provide some LP strategies for different utility functions
If we view an LP’s contribution to a CFMM pool as a portfolio allocation, we explicitly find both linear and Markowitz convex programs for how to optimize LP allocation
These are *easily* solved on a laptop and we numerically show how LP allocations change as a function of risk-aversion (cvxpy code included!)
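For a flavor of the Markowitz version, here's a minimal cvxpy sketch (μ, Σ, and the long-only constraint are my toy assumptions, not the paper's exact program):

```python
import cvxpy as cp
import numpy as np

# Markowitz-style LP allocation across n pools: trade off expected
# return mu'w against variance w'Sigma w via risk-aversion gamma.
n = 4
mu = np.array([0.05, 0.08, 0.03, 0.10])    # assumed expected returns
Sigma = np.diag([0.02, 0.05, 0.01, 0.09])  # assumed return covariance
gamma = 2.0                                # risk-aversion parameter

w = cp.Variable(n)
prob = cp.Problem(
    cp.Maximize(mu @ w - gamma * cp.quad_form(w, Sigma)),
    [cp.sum(w) == 1, w >= 0],              # fully invested, long-only
)
prob.solve()
print(w.value)  # allocation shifts to low-variance pools as gamma grows
```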
We hope that a clean presentation of these results can make the field more accessible to folks in theoretical CS, ML, statistics, and other quantitative fields
But what’s next? You’ll have to wait until next month ✌🏾
On-chain DeFi intrinsically embeds collateral requirements; TradFi often misprices collateralized credit risk due to a lack of transparency
Pitch to (mid) economists: If crypto works… who fucking needs Basel III vs. a validated state root?
The idea that on-chain finance needs undercollateralized lending has been an analogy in search of a solution
DeFi is most successful when you have assets where *verifiable* guarantees on collateral are paramount to position value
But: undercollateralized == not verifiable
Why is this?
Dumb answer: Being undercollateralized opens the can of worms of the binomial American option model: a protocol has to figure out a stopping time for when you’re “too” undercollateralized and need to exit the market — MEV ruins this!
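For intuition, here's the textbook binomial American-put computation (purely illustrative parameters, not a protocol design): liquidation is an optimal-stopping problem solved by backward induction, and every input below would be contested on-chain:

```python
import numpy as np

# Binomial American put via backward induction: at each node, compare
# immediate exercise (liquidate now) to the discounted continuation value.
S0, K, r, u, d, T = 100.0, 95.0, 0.01, 1.1, 0.9, 5  # assumed parameters
q = (np.exp(r) - d) / (u - d)                       # risk-neutral up-prob

S = S0 * u ** np.arange(T, -1, -1) * d ** np.arange(0, T + 1)
V = np.maximum(K - S, 0.0)                          # terminal payoffs
for t in range(T - 1, -1, -1):
    S = S0 * u ** np.arange(t, -1, -1) * d ** np.arange(0, t + 1)
    cont = np.exp(-r) * (q * V[:-1] + (1 - q) * V[1:])
    V = np.maximum(K - S, cont)                     # stop early if better
print(V[0])
```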
Are Liquid Restaking Tokens (LRTs) essential to restaking security and even risk mitigation, vs. being a source of systemic risk?
Surprisingly, yes! New paper w/ @malleshpai shows that smart allocation to AVSs is crucial for security against cascading failures
@ether_fi @RenzoProtocol @swellnetworkio @puffer_finance @KelpDAO @Eigenpiexyz_io First: What does this have to do with LRTs? They are the largest allocators + should be 'smart money' (due to economies of scale), with outsized impact on restaking security (see, e.g., )
Our paper builds off of the excellent work of @tim_roughgarden & Naveen that formulates cascade risk in combinatorial terms and argues that overcollateralization is needed for security — but we arrived at this via a circuitous path
Repeated arguments over economic security + issuance are a reminder that Proof of Stake's threat model is completely and utterly broken: BFT models assume the worst economic attack is a double spend
Doesn't make sense when:
stablecoin supply + non-staked TVL in DeFi > ETH staked
2024: We should analyze principal-agent relationships vs. reducing everything to double spends
Compute the max profit for each principal-agent interaction (e.g. double spends, oracle manipulation, etc.)
Example: Consider a rollup with a canonical bridge; there are many P-A interactions/attacks with different max profits:
1. DA layer down 💰💰💰💰
2. Sequencer censors execution 💰💰💰
3. Sequencer delays execution 💰💰
4. Sequencer sandwiches you / 'cheap' MEV attacks 💰
Three items are behind a wall and a solver is going to get one of them for you
Do you get a goat or a car, anon?
@malleshpai, @ks_kulk, @theo_diamandis and I show you that if solvers have to do more work to deliver the item to you, they're not going to show up to the auction
There’s been a lot of talk about 'intents'. What are they?
Simply put: they're markets for transaction execution where third parties called solvers compete to satisfy user orders (and any constraints those orders come with)
Question: What are the principal-agent problems here?
tl;dr: We find broad conditions for oligopoly in intent markets
What is oligopoly here?
1. Fewer bidders, k, than the maximum possible, n, participate (i.e. k/n → 0 as n → ∞, or k = o(n))
2. Users get *worse* prices even though the number of (potential) solvers increases!
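One toy way to see how this can happen (my free-entry sketch, not the paper's actual model): if each solver pays a fixed cost c to bid and the k entrants split a surplus V evenly, entry stops near k ≈ V/c no matter how large n is:

```python
# Free-entry toy model: solvers enter while expected profit V/k covers
# the entry cost c, so the entrant count is pinned near V/c, not n.
V, c = 100.0, 8.0            # assumed surplus and per-solver entry cost
for n in [10, 100, 1000]:
    k = min(n, int(V // c))  # entry stops once V/k would drop below c
    print(n, k, k / n)       # k/n -> 0 as n grows: oligopoly
```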
While @artgobblers isn't exactly my cup of tea, its novel NFT auction mechanism is cool from an auction theory perspective, but is it incentive compatible (IC) for both buyers and sellers?
tl;dr: It is *not* IC but can be modified to be IC!
Quick recap: A gradual dutch auction (GDA) is a sequence of n auctions whose initial prices a_1 < ... < a_n are increasing but where the price of an auction decays as a_i * p(t) where p is non-increasing (e.g. p(t) = exp(-t) in the original paper)
Why would you use such an auction? If you have a series of NFT auctions (e.g. an edition, a daily @nounsdao auction, etc.), you want to incentivize users to pick bundles (e.g. any subset of items to buy) without forcing all of the supply onto the market (which would reduce auction revenue)
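As a sanity check of the definition above (made-up parameters, mine), bundle prices under a GDA with p(t) = exp(−t):

```python
import numpy as np

# GDA prices: auction i starts at a_i and decays as a_i * p(t), with p
# non-increasing; buying the first q items at time t costs sum a_i * p(t).
a = np.array([1.0, 1.5, 2.0, 2.5])  # increasing initial prices (assumed)
p = lambda t: np.exp(-t)            # decay from the original paper

def bundle_cost(q, t):
    return float(np.sum(a[:q] * p(t)))

print(bundle_cost(2, 0.5))          # (a_1 + a_2) * e^{-0.5}
```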