@__paleologo Okay, I read it. I'll summarize it and provide some commentary. In brief I think it's a useful and credible paper, with specific empirical results, and I'd explore it for further research. But it's not groundbreaking.
@__paleologo So they start with the standard expected price impact we all know and love. Linear in vol, polynomial in participation rate. Just stage setting.
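(To fix notation for myself, not from the paper: the generic form I have in mind is expected impact ≈ coefficient × vol × (participation rate)^beta. A toy sketch, with every coefficient illustrative:)

```python
import numpy as np

def expected_impact(q, adv, sigma, beta=0.5, coeff=1.0):
    """Generic expected price impact: linear in volatility, a power of the
    participation rate q/adv. beta=0.5 gives the familiar square-root law.
    All parameter values here are illustrative, not the paper's estimates."""
    return coeff * sigma * (q / adv) ** beta
```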
@__paleologo They define position illiquidity using the expected impact function. Their first claim is that the position illiquidity can be decomposed into position concentration, a measure from Pastor 2020, and (a function of) fund size.
@__paleologo Concentration C is a quadratic function of the portfolio position weight normalized by the position "liquidity weight" in the portfolio universe. Liquidity weight is the position ADV x position vol, divided by the sum of all such ADV x vol in the position universe. okay, fair.
@__paleologo "Fund size" is AUM, but also divided by the sum of all positions' liquidity. They show how this position-level decomp will roll up to fund level. So we have a portfolio decomp into the Pastor concentration and size measures from liquidity, and that gives us portfolio illiquidity.
@__paleologo We briefly come up for air with salient examples: SPY is highly liquid, XNDA is not. XNDA is a tiny biotech ETF, and biotech is notoriously illiquid. So, duh :). A lot of formalism for not much yet but let's continue.
@__paleologo Their next major claim is that the self-inflated ETF return from its own flow-driven trading is estimable as a function of its fund-level illiquidity J multiplied by its price impact theta and its relative flow f. Okay.
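(As I read it, the claim is roughly: self-inflated return ≈ theta × J × f. Purely illustrative:)

```python
def self_inflated_return(theta, J, f):
    """Self-inflated return from flow-driven trading, as I read the claim:
    price impact (theta) times fund illiquidity (J) times relative flow (f).
    Illustrative only; the paper's exact expression may have more structure."""
    return theta * J * f
```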
@__paleologo We arrive at the first bit useful for crowding: that the authors will estimate fund-level price impact caused by ETF flows. This is what carries us to the crowding-related pieces later on.
@__paleologo They proceed to a decomposition of ETF return into uninformative (flow-driven) and informative (fundamental) price movement, conceptually similar to decomposing price return into systematic versus specific risk.
@__paleologo They estimate the theta term (impact) in the foregoing using a fixed-effects OLS pooled on (fund, stock, time). Parameter estimates follow. Itsy bitsy R squareds.
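(For concreteness, a minimal sketch of what a pooled fixed-effects estimation like that could look like with linearmodels, on synthetic data. The variable names, the collapsed fund-by-stock entity index, and the specification are my guesses, not the paper's actual regression.)

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Synthetic (fund x stock, date) panel; all names are hypothetical stand-ins.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [range(50), pd.bdate_range("2020-01-01", periods=250)],
    names=["fund_stock", "date"])
df = pd.DataFrame(index=idx)
df["flow_impact"] = rng.normal(size=len(df))                     # flow-driven impact regressor
df["ret"] = 0.05 * df["flow_impact"] + rng.normal(size=len(df))  # noisy constituent returns

model = PanelOLS.from_formula("ret ~ flow_impact + EntityEffects + TimeEffects", data=df)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params["flow_impact"], res.rsquared)                   # itsy bitsy R squared here too
```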
@__paleologo They show how to roll up the expected price impact to the fund level. tada, an ETF's flows drive up the prices of its constituents. more parameter estimates and R squareds. moving along.
@__paleologo We arrive at the brass tacks: reversal. that's what we care about right? are these dumb ETF flow-driven price movements going to revert? Will my poor portfolio be hit by the dreaded crowding degross cascade?
@__paleologo So, they estimate reversal, and they find that the initial price impact reverts over 5-10 days, with a long-run impact of 0.4. I am skeptical, but this is a refreshingly specific claim in crowding literature, so kudos for that.
@__paleologo Here's a closer look at that.
@__paleologo Okay, another systematic-vs-specific style decomp of ETF returns, this time into self-inflated (from price pressure) and fundamental. So conceptually, how much is susceptible to a feedback loop?
@__paleologo This brings us to the provocatively named "Ponzi flow", which is a feedback loop of continuous flow-driven trading induced by the price pressure of the fund's previous flows. Any risk manager's next logical question: does this cause a bubble and crash in the fund?
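(To make the mechanism concrete for myself: a toy feedback loop where flows create impact, impact creates returns, and returns attract more flow. Parameter names and values are mine, not the paper's model.)

```python
import numpy as np

def ponzi_loop(theta, J, chi, f0, steps=20):
    """Toy flow -> self-inflated return -> performance-chasing flow loop.
    theta: price impact, J: fund illiquidity, chi: flow sensitivity to past
    returns, f0: initial relative flow shock. Explosive when theta*J*chi > 1."""
    flows, rets = [f0], []
    for _ in range(steps):
        r = theta * J * flows[-1]   # self-inflated return from the latest flow
        rets.append(r)
        flows.append(chi * r)       # next period's flow chases that return
    return np.array(rets)

print(ponzi_loop(theta=0.5, J=2.0, chi=1.5, f0=0.01))  # grows by 1.5x per step
```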
@__paleologo Finally, they find a statistically significant effect. BUT conclude that timing it is very difficult. And so I throw my hands up in the air and weep, because this is basically all crowding research.
@__paleologo Crowding is something I think about often. This paper is interesting to me because it's modelable at higher resolution (days, at least) than 13F. And there are some empirical results that are decent (though I'm skeptical of the power/significance).
@__paleologo However, when I think about crowding I care less about the "what" and more about the "when". I think it's nice that we can confirm the intuition that frothy flows precipitate drawdowns, but I can't use that information without much better timing specificity than is shown here.
@__paleologo In any case, hope that helps. Like I said - interesting research direction, not groundbreaking, but I think it's worth a read and pondering.
@__paleologo And @systematicls I think you'll like some of this too
@__paleologo @systematicls Also @choffstein this might be relevant to your interests (probably less on the crowding side, more the ETF scene).
@stevehouf @itsandrewgao @__paleologo @quantymacro I think quants are best understood in the broader context of the market. The purpose of a market is to facilitate transactions and ascertain the correct price of an asset. Thus those who "make the market" provide liquidity. Buyers need not find sellers to transact immediately.
@stevehouf @itsandrewgao @__paleologo @quantymacro To facilitate the liquidity, the makers must accept some risk, because they are offering to buy (sell) an asset when there isn't immediately another buyer (seller) to flip the asset to. On the market making side, the quants involved here are preoccupied with understanding this risk
@stevehouf @itsandrewgao @__paleologo @quantymacro and hedging it, without compromising their speed requirements to stay competitive as a liquidity provider. They need to be able to do some forward projection of the price from the current state, up to seconds or minutes in the future.
A persistent challenge in this industry is people who hold on to half-understood ideas. This is especially common with stakeholders who are generally intelligent but not quants, because they pick up important concepts but don't devote the time to rigorously understand them.
Some examples are famous: "we don't want to do MVO for portfolio construction, because it's impossible to estimate the covariance matrix, and you'll end up with portfolios that don't make sense." They'll take these ideas and hold on to them, believing they've cracked the case.
Others are less famously contentious but still crop up all the time. "We don't need to model transaction cost or optimize our execution, we won't trade more than 3% of volume per day." Market impact _is_ the price movement! You always impact the market and it always matters!
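(A rough back-of-the-envelope, assuming a square-root impact law, a 2% daily vol name, and an impact coefficient of ~1, all my assumptions:)

```python
import math

# Even at 3% of ADV, the square-root law puts your footprint around
# 0.02 * sqrt(0.03) ~ 35 bps per day of trading - not something to wave away.
print(round(1.0 * 0.02 * math.sqrt(0.03) * 1e4, 1), "bps")
```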
@macrocephalopod "For each of the following questions, describe your answer as rigorously as you can.
Question 1: I tell you that I am regressing Y on X. Describe what I probably mean as rigorously and completely as you can, without sacrificing generality.
@macrocephalopod "Question 2: I have a univariate linear model of X and Y, both n x 1 column vectors. Describe what happens when I try to estimate this model with an exact copy of X added as a second exogenous variable, and why, as rigorously and completely as you can."
@macrocephalopod "Question 3: Suppose I estimated my initial univariate linear model with a Ridge penalty. Describe what this means in the language of norms. Be very specific and precise."
As a reminder, this is what momentum has looked like YTD. Nothing meaningful happened today, yesterday, or in the past week. All the commentary you're reading about momentum is worse than wrong - it's not falsifiable. You can manage your portfolio successfully by ignoring it.
- All the sell-side research you're getting about momentum is written by people who couldn't implement the momentum factor even if they were given an exact spec.
- The people at your firm talking about momentum are looking for a narrative and probably lack skepticism.
- "Factor rotations" can be safely ignored. If they can't be, your portfolio is taking bets on things for which you're incapable of adjudicating expected value. That is a failure to manage your portfolio, not a virtue. Don't compensate by paying attention to noise.
Very good thread. To extend this: signals that have positive correlation with future returns are still useful even if they don't surmount transaction costs on their own, because they lower volatility and costs when combined. I will show this via simulation.
Here's our setup, mathematically. We'll take normally distributed returns for convenience, n signals, t time periods, c average cost (%), rho average corr between each signal and true return. This generates sample true returns and signals matching our target correlation.
In code we have the following. For simplicity assume we only hold a position within each time period, and c represents round-trip costs. Then we calculate net returns of each signal as the cumulative product of the signal times return, less costs.
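(The code screenshot from the original tweet isn't captured here, so this is my reconstruction of the described setup rather than the author's actual code. One liberty I've taken: costs are proportional to position size, so the netting benefit of combining signals shows up.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 5, 2500     # n signals, t time periods
c = 0.0005         # round-trip cost per unit of position traded (my assumption)
rho = 0.03         # target corr between each signal and the true return

# Sample true returns, then signals with roughly the target correlation to them.
ret = rng.normal(0.0, 0.01, size=t)
z = (ret - ret.mean()) / ret.std()
signals = rho * z[:, None] + np.sqrt(1 - rho**2) * rng.standard_normal((t, n))

# Hold position = signal within each period; net return = position * return - cost.
net_single = np.cumprod(1 + signals * ret[:, None] - c * np.abs(signals), axis=0)

# Equal-weight combination: same covariance with returns as a single signal, but
# smaller positions (the noise partially nets out), so less vol and less cost drag.
combo = signals.mean(axis=1)
net_combo = np.cumprod(1 + combo * ret - c * np.abs(combo))

print("single-signal terminal wealth:", np.round(net_single[-1], 3))
print("combined terminal wealth:     ", round(net_combo[-1], 3))
```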
Maiden Century is a good platform, but there are more clever ways to construct signals from typical sources of alternative data. Here is an example. Let's say you have some agnostic alt dataset that looks like this plot. How would you build an expected return signal from this?
Further suppose that the dataset is ticker-tagged for 200 equities which respond to the dataset in roughly similar ways. Maybe the economic basis of the data is shared by all the equities, or maybe investor trading behavior on the data unites them.
More formally we can say the alt dataset comprises an explanatory factor portfolio, and the equity returns have betas to that factor. That implies a covariance structure between the equities we can exploit for superior signal construction. So we continue simulating this data...
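(Picking up that thread with my own sketch of the simulated structure, since the rest isn't captured here: a single factor implied by the alt data, betas for each equity, and the resulting covariance.)

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, t = 200, 500

factor = rng.normal(0.0, 0.02, size=t)              # factor "returns" implied by the alt dataset
betas = rng.normal(1.0, 0.3, size=n_assets)         # each equity's exposure to that factor
idio = rng.normal(0.0, 0.02, size=(t, n_assets))    # idiosyncratic noise
returns = factor[:, None] * betas[None, :] + idio   # equity returns sharing one explanatory factor

# The implied covariance structure: a rank-one factor piece plus a diagonal
# idiosyncratic piece, which is what the signal construction can exploit.
cov = np.outer(betas, betas) * factor.var() + np.diag(idio.var(axis=0))
```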