I’ve totally made up this word, but there’s no word for the concept it captures, and I’ve been using it for over a year regardless.
If you use Bollinger bands to trade mean-reverting portfolios, your lag error is the loss of alpha from the deterministic component.
This comes in 3 forms:
Jump risk:
Large jumps in the mean take time for your moving average to catch up to, and that causes errors because moving averages are lagged. This is a regime-shift-ish problem and is helped by unsupervised learning models with conservatism controls.
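A toy sketch of the jump case (all parameters here are made up for illustration; numpy only): after the true mean jumps, a simple moving average needs on the order of its window length to re-center, and every bar in between mis-measures the mean.

```python
import numpy as np

np.random.seed(0)
N = 50                                                # MA window (assumed)
true_mean = np.r_[np.zeros(300), np.full(300, 5.0)]   # mean jumps 0 -> 5 at t=300
price = true_mean + np.random.normal(0.0, 0.5, 600)   # white noise around the mean

# rolling mean via cumulative sums
ma = np.full(600, np.nan)
c = np.cumsum(price)
ma[N - 1:] = (c[N - 1:] - np.r_[0, c[:-N]]) / N

# bars after the jump where the MA is still more than 1 noise-sigma from the new mean
lagged = np.sum(np.abs(ma[300:] - 5.0) > 0.5)
print(f"bars of lag error after the jump: {lagged}")  # on the order of the MA window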
…
The next is mismatched period:
If there is a sine wave with white noise, we may attempt to use an MA to trade the noise part. This gives us lag error, because we are not accounting for the broader sine function, and that hurts our PnL. This is the mismatched-timeframe problem.
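A quick toy illustration of this one (parameters again made up for the sketch): a slow sine plus white noise, a short MA meant to capture only the noise, and the systematic gap between the MA and the sine it fails to track is the lag error.

```python
import numpy as np

np.random.seed(1)
t = np.arange(2000)
signal = 10 * np.sin(2 * np.pi * t / 500)           # slow deterministic cycle
price = signal + np.random.normal(0.0, 1.0, t.size)

N = 100                                             # MA window (assumed)
c = np.cumsum(price)
ma = (c[N - 1:] - np.r_[0, c[:-N]]) / N

# systematic part of the error: the MA vs the sine it lags behind;
# this is the lag error from trading on the mismatched (shorter) period
lag_err = ma - signal[N - 1:]
print(f"mean |lag error|: {np.mean(np.abs(lag_err)):.2f}")
```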
…
Finally we have trend-based lag error:
If your mean-reverting portfolio is trending up and you are using Bollinger bands, your upper band will be more likely to get hit, BUT it has negative alpha inherently built in because you are betting against the trend.
…
This form is annoying because the trend makes you more likely to take the negative-alpha trade.
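A hedged sketch of the asymmetry (dynamics and parameters are assumptions for illustration, not a claim about real markets): simulate a weakly mean-reverting series whose mean trends upward, mark every Bollinger band touch, and compare the average forward move after upper-band touches (shorts) vs lower-band touches (longs).

```python
import numpy as np

np.random.seed(2)
T, N, K = 50000, 50, 200                 # series length, Bollinger window, holding horizon
drift, kappa, sigma = 0.05, 0.005, 1.0   # upward trend, weak mean reversion (assumed)

mean = drift * np.arange(T)              # trending true mean
x = np.zeros(T)
for i in range(1, T):
    x[i] = x[i - 1] + kappa * (mean[i - 1] - x[i - 1]) + np.random.normal(0.0, sigma)

# rolling mean and std via cumulative sums
c, c2 = np.cumsum(x), np.cumsum(x ** 2)
ma = (c[N - 1:] - np.r_[0, c[:-N]]) / N
sd = np.sqrt(np.maximum((c2[N - 1:] - np.r_[0, c2[:-N]]) / N - ma ** 2, 1e-12))

px = x[N - 1:]
fwd = px[K:] - px[:-K]                   # forward K-bar move after each signal
hi = px[:-K] > ma[:-K] + 2 * sd[:-K]     # upper-band touch -> fade (short)
lo = px[:-K] < ma[:-K] - 2 * sd[:-K]     # lower-band touch -> fade (long)
print(f"avg fwd move after upper-band touch: {fwd[hi].mean():+.2f} (a short loses if > 0)")
print(f"avg fwd move after lower-band touch: {fwd[lo].mean():+.2f} (a long wins if > 0)")
```

With the trend strong relative to the reversion, the upper-band fades carry the drift against them while the lower-band fades carry it with them, which is exactly the built-in asymmetry described above.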
Lag error always works against you because you get out at the mean, not after it (it’s mean reversion; expecting it to continue past the mean is just silly).
Lag error is a form of…
Persistence, or non-mean reversion, on a scale you fail to account for; or (as with jump error) your model believes something that is just a huge deviation is part of a persistent trend because of its size, and the mean moves.
• • •
A thread on all the components of latency, optimizations, and the assumptions involved in modelling it.
...
This will primarily be for HFT and focus on digital assets, but I will explain which parts are digital-asset specific and which are not, as much of it is generally applicable.
...
So what are the 3 "components" of our latency:
1. Our compute
2. The network
3. The matching engine
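A hedged sketch of how you might attribute a single tick-to-trade round trip to those three components from timestamps (all field names here are illustrative; real setups use hardware timestamping, and the cross-machine deltas assume synchronized clocks):

```python
from dataclasses import dataclass

@dataclass
class LatencyBreakdown:
    # all times in nanoseconds since epoch; names are hypothetical
    t_exchange_event: int   # exchange timestamp on the market-data message
    t_wire_in: int          # the packet hits our NIC
    t_order_out: int        # our order leaves the NIC
    t_exchange_ack: int     # exchange timestamp on the order ack

    @property
    def network_in(self) -> int:    # exchange -> us (needs clock sync)
        return self.t_wire_in - self.t_exchange_event

    @property
    def compute(self) -> int:       # our decode + decision + serialization
        return self.t_order_out - self.t_wire_in

    @property
    def network_out_plus_engine(self) -> int:  # us -> exchange + matching-engine queue
        return self.t_exchange_ack - self.t_order_out

lb = LatencyBreakdown(1_000_000_000, 1_000_150_000, 1_000_155_000, 1_000_400_000)
print(lb.network_in, lb.compute, lb.network_out_plus_engine)  # 150000 5000 245000
```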
Accumulated improvement: it’s the gains in performance you accumulate over time from tuning your strategy and improving it.
…
When it comes to non-HFT, accumulated improvement often leads to overfitting.
Continuing to tune a model once it’s been created often decreases performance; the exception is simply re-fitting the model on new data as it comes out.
Let’s say a new trade has occurred on an exchange. If we have a latency edge, we want to be one of the people incorporating it into the price instead of one of the people reacting to the price change.
…
As we can see from the Pepe example below, a trade will cause an initial spike before a much slower levelling off.
Where it levels off (relative to the starting point) is important to know, as well as both the spike-up and return points.
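One simple way to estimate that spike-and-decay shape (a hedged event-study sketch with synthetic toy data; trade-sign handling is omitted): average the mid-price path over the samples following each trade, relative to the mid just before the trade.

```python
import numpy as np

def impact_curve(mid: np.ndarray, trade_idx: np.ndarray, horizon: int) -> np.ndarray:
    """Average mid-price response over `horizon` samples after each trade,
    measured relative to the mid one sample before the trade."""
    paths = []
    for i in trade_idx:
        if 0 < i and i + horizon < mid.size:
            paths.append(mid[i:i + horizon] - mid[i - 1])
    # mean over events gives the spike / levelling-off curve
    return np.mean(paths, axis=0)

# toy usage with synthetic data (real use: your tick data, signed by aggressor side)
np.random.seed(3)
mid = np.cumsum(np.random.normal(0, 0.01, 10_000)) + 100
trades = np.random.choice(np.arange(1, 9_000), 200, replace=False)
curve = impact_curve(mid, trades, horizon=50)
print(curve[:5], curve[-1])   # initial spike vs. where it levels off
```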
Fill probability analysis is primarily useful when optimizing maker/taker trades.
These are trades where we make into the first leg and then the rest of the legs are takers.
That is, a limit order first, then market orders the rest of the way (either limit IOC or market).
...
An example of this is triangular arbitrage, where we make into the first leg and then use taker orders to exit.
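A hedged sketch of that maker-into-taker structure (the client object, method names, and symbols are all hypothetical, not any real exchange API): quote the first leg passively, and only on fill fire the remaining legs as takers.

```python
# hypothetical exchange client; none of these method names come from a real library
def try_triangle(client, qty_btc: float = 0.1, min_edge_bps: float = 5.0) -> None:
    # route: USDT -> BTC (maker) -> ETH (taker) -> USDT (taker)
    buy_px = client.best_bid("BTC/USDT")                 # where we'd quote leg 1
    implied_sell = client.best_bid("ETH/USDT") / client.best_ask("ETH/BTC")
    edge_bps = (implied_sell / buy_px - 1.0) * 1e4

    if edge_bps < min_edge_bps:   # must cover fees + slippage on the taker legs
        return

    # leg 1: maker; fill-probability analysis decides which level to quote
    order = client.place_limit("BTC/USDT", side="buy", price=buy_px,
                               qty=qty_btc, post_only=True)
    if client.wait_fill(order, timeout_s=1.0):
        # legs 2 and 3: takers (IOC/market), crossing the spread immediately
        eth_qty = qty_btc / client.best_ask("ETH/BTC")
        client.place_ioc("ETH/BTC", side="buy", qty=eth_qty)
        client.place_ioc("ETH/USDT", side="sell", qty=eth_qty)
```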
How can we estimate the probability of getting filled at any given level, and thus use this information to determine the optimal amount of spread to quote?
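One common first pass (an assumption on my part, not necessarily the thread's method): model the fill probability at distance d from the mid as exp(-k*d), fit k from historical quote outcomes, and pick the d that maximizes expected edge d * P(fill).

```python
import numpy as np

def fit_k(distances: np.ndarray, filled: np.ndarray) -> float:
    """Fit P(fill | quote distance d) = exp(-k*d) from binned fill frequencies.
    `distances`: quote distances from mid for past quotes; `filled`: 0/1 outcomes."""
    bins = np.quantile(distances, np.linspace(0, 1, 11))
    idx = np.clip(np.digitize(distances, bins) - 1, 0, 9)
    d_mid, p = [], []
    for b in range(10):
        m = idx == b
        if m.sum() > 0 and filled[m].mean() > 0:
            d_mid.append(distances[m].mean())
            p.append(filled[m].mean())
    # log P = -k*d  ->  least squares for k (no intercept; the model is assumed)
    d_mid, p = np.array(d_mid), np.array(p)
    return -float(np.sum(d_mid * np.log(p)) / np.sum(d_mid ** 2))

def optimal_distance(k: float, grid: np.ndarray) -> float:
    """Maximize expected edge per quote, d * exp(-k*d); the optimum is d* = 1/k."""
    return float(grid[np.argmax(grid * np.exp(-k * grid))])

# toy usage: synthetic quote outcomes with true k = 2.0
np.random.seed(4)
d = np.random.uniform(0.01, 2.0, 5000)
y = (np.random.rand(5000) < np.exp(-2.0 * d)).astype(float)
k = fit_k(d, y)
print(k, optimal_distance(k, np.linspace(0.01, 2, 200)))   # ~2.0 and ~1/k
```

In practice the edge term also has to net out fees and adverse selection, which pushes the optimal quote wider than the bare d* = 1/k.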
What is a mark? Well a mark is how we value something.
We can mark to model (our own subjective value of what something is worth), mark to market (the current price), or mark to cost-to-close (the market price with liquidity cost factored in).
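A small sketch of the difference between the last two (the book here is a toy example): mark-to-market values everything at the touch, while mark-to-cost-to-close walks the book for your full size.

```python
def mark_to_market(bids, qty):
    """Value a long position at the best bid, ignoring depth.
    `bids`: list of (price, size), best first."""
    return bids[0][0] * qty

def mark_to_cost_to_close(bids, qty):
    """Value a long position if liquidated into the current bids."""
    value, remaining = 0.0, qty
    for price, size in bids:
        take = min(remaining, size)
        value += take * price
        remaining -= take
        if remaining <= 0:
            return value
    raise ValueError("not enough liquidity on the book for this size")

book = [(100.0, 2.0), (99.5, 3.0), (99.0, 10.0)]   # toy bid ladder
print(mark_to_market(book, 5.0))          # 500.0
print(mark_to_cost_to_close(book, 5.0))   # 2*100 + 3*99.5 = 498.5
```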
…
That’s mostly accounting systems though. Let’s talk about the relevant part for market making.