StatArb
Mar 3 · 4 tweets · 1 min read
Multithreading is not an instant speed booster:

A word on these three:

Async I/O
Multi-threaded CPU tasks
Hyper-threaded CPU tasks
Async I/O only helps speed-wise when a meaningful share of the work is actually waiting on I/O (e.g. the writing-data component), and it can slow you down if that share is small. Remember there is only one NIC, cache is not infinite, and managing concurrency is expensive.

1/3
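As a toy illustration of when async I/O pays off: overlapping waits are essentially free, but only if the work really is waiting. A minimal asyncio sketch (the 10 ms sleep is a stand-in for a socket or disk write):

```python
import asyncio
import time

async def io_task():
    # Simulated I/O wait (stand-in for a socket/disk write);
    # awaiting yields control back to the event loop.
    await asyncio.sleep(0.01)

async def main():
    t0 = time.perf_counter()
    # 100 overlapping 10 ms waits complete together, not one after another.
    await asyncio.gather(*(io_task() for _ in range(100)))
    return time.perf_counter() - t0

elapsed = asyncio.run(main())
print(f"100 concurrent 10 ms waits took {elapsed:.3f} s")
```

If the tasks were CPU-bound instead of sleeping, the event loop would add overhead without any overlap to exploit.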
Multi-threading is great, but not for file handling: that won't be faster, and it splits your cache once again. For ultra-low-latency applications you usually enable just one very fast core so you can maximise cache.

2/3
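The one-fast-core setup can be approximated in software by pinning the process to a single core. A Linux-only sketch using only the stdlib (core 0 is an arbitrary choice; real low-latency setups also isolate the core at the OS level):

```python
import os

# Linux-only: restrict this process to a single core so the hot path
# keeps its working set in that core's private cache.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})  # pid 0 = this process; run on core 0 only
    print("pinned to cores:", os.sched_getaffinity(0))
else:
    print("sched_setaffinity not available on this platform")
```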
Hyper-threading (one thread per logical core instead of per physical core) can give +50% on some tasks, but -50% on others. Again, it all comes down to cache and the duration of the task (i.e. the time it takes to create and manage threads). Choose your data structures wisely.

3/3
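The task-duration point is easy to demonstrate in CPython, where the GIL makes it a worst case: for tiny tasks, thread hand-off costs dominate and a pool is slower than a plain loop. A sketch (the task and sizes are arbitrary):

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    return x * x  # far too small to amortize the thread hand-off cost

data = list(range(10_000))

t0 = time.perf_counter()
serial = [tiny_task(x) for x in data]
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
# os.cpu_count() counts logical cores, i.e. hyper-threads included.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    threaded = list(pool.map(tiny_task, data))
t_threaded = time.perf_counter() - t0

assert serial == threaded
print(f"serial {t_serial * 1e3:.1f} ms vs threaded {t_threaded * 1e3:.1f} ms")
```

With longer tasks that release the GIL (I/O, NumPy, C extensions), the comparison can flip; the point is that the break-even depends on task duration.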

More from @TerribleQuant

Mar 3
What is really interesting is the high-level mathematics of neural networks. Ignoring activation functions, I mean the shape. Two groundbreaking discoveries:

1) Inference quality is (non-linearly) proportional to NN depth, and so is compute time.
2) An infinitely wide NN is equivalent to an SVM.
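The compute half of point 1 can be seen directly: each extra layer adds one matrix multiply to the forward pass. A minimal NumPy sketch (the width and identity weights are arbitrary placeholders, chosen so the output is checkable):

```python
import numpy as np

def forward(x, layers):
    # Plain MLP forward pass with activations ignored, as in the tweet:
    # compute is one matmul per layer, so cost grows with depth.
    for W in layers:
        x = x @ W
    return x

width = 64
x = np.ones((1, width))
flops_per_layer = 2 * width * width  # one width x width matmul
for depth in (2, 4, 8):
    layers = [np.eye(width) for _ in range(depth)]  # identity weights
    y = forward(x, layers)
    print(f"{depth} layers -> ~{depth * flops_per_layer} FLOPs per forward pass")
```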
Roughly 1000 transistors are needed to multiply a number in a digital system; an analog signal just needs a resistor. We already know NNs can be inaccurate and still work fine, hence the special float formats for them. Analog systems are the way forward as transistors hit the size...
of an atom. Nowadays we just pack more on a chip, but densities haven't changed much. The opportunities that analog systems offer are worth investigating. The brain is analog, after all, and by Moore's law we would need 40 years to reach 1 million times our compute and get similar...
Mar 2
Advanced algorithmic trading textbooks not in roadmap:

Quantitative Trading: Algorithms, Analytics, Data, Models, Optimization amazon.co.uk/dp/1498706487/…

1/6
That one was in the roadmap, but I didn't highlight it.

Algorithmic Trading Methods: Applications Using Advanced Statistics, Optimization, and Machine Learning Techniques amazon.co.uk/dp/0128156309/…

2/6
Trader Construction Kit: Fundamental & Technical Analysis, Risk Management, Directional Trading, Spreads, Options, Quantitative Strategies, Execution, Position Management, Data Science & Programming amazon.co.uk/dp/0997629517/…

3/6
Feb 28
Elaborating on topological structure: a Venn diagram is a good basic example. Any point in section B of the diagram == any other point in section B, but is completely different from anything in section A. This is a topological structure, and decision trees effectively do this.
A lot of the time this non-linearity lets you pick up on subtleties, but it is incredibly prone to overfitting, and it actually overfits when there is a linear relationship, because linear relationships (straight lines) aren't that topological: some point is, in some standard form...
different from another point, whereas they can be the same for decision trees. This is where decision trees benefit from using logistic regressions and linear regressions and not classifying at all. Just take the linear regression. You get a more non-linear line, but
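The overfitting-on-linear-relationships point can be shown with a toy example: fit a step function (a stand-in for a piecewise-constant decision tree) and a plain linear regression to noisy linear data, then compare out-of-sample error. All data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + rng.normal(0, 0.1, 200)  # truly linear relationship
x_test = rng.uniform(-1, 1, 200)
y_test = 2.0 * x_test

# "Tree"-style fit: partition x into bins, predict each bin's mean.
bins = np.linspace(-1, 1, 9)
idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
bin_means = np.array([y[idx == b].mean() for b in range(len(bins) - 1)])
idx_t = np.clip(np.digitize(x_test, bins) - 1, 0, len(bins) - 2)
tree_mse = np.mean((bin_means[idx_t] - y_test) ** 2)

# Linear fit: one slope + intercept recovers the straight line directly.
slope, intercept = np.polyfit(x, y, 1)
lin_mse = np.mean((slope * x_test + intercept - y_test) ** 2)

print(f"step-function MSE {tree_mse:.4f} vs linear MSE {lin_mse:.4f}")
```

The step function wastes capacity approximating a straight line with plateaus, which is one way to read "straight lines aren't that topological".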
Feb 28
Not sure how I feel about de Prado. For some things he is just stupid, like what the fuck is triple barrier, and other times he is genius, like when he suggested t-values of microstructure features. He did a meh job on the explanation, and I prefer Hasbrouck's book, but GOAT idea.
Anyways, have a skim through Advances in Financial Machine Learning and tell for yourself. There are a few moments of genius and other times utter horseshit. Fractional differencing is genius, but it isn't explained properly, and the stationarity of features etc. is key.
Some features do not need stationarity; a lot of them don't. Most wavelets need stationarity, but modern methods have non-stationary wavelets now! Here is a lecture by him I mildly agreed with:
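Fractional differencing itself is compact: the weights of the operator (1 - B)^d follow a simple recursion, and d between 0 and 1 trades stationarity against memory. A minimal NumPy sketch (the window length is an arbitrary truncation choice):

```python
import numpy as np

def frac_diff_weights(d, n):
    # Weights of (1 - B)^d via the recursion w_k = -w_{k-1} * (d - k + 1) / k.
    # d = 0 leaves the series untouched; d = 1 is the ordinary first difference.
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, n_weights=20):
    # Fixed-window fractional difference: each output is a weighted
    # sum of the point's own recent past.
    w = frac_diff_weights(d, n_weights)
    return np.convolve(series, w, mode="valid")

print(frac_diff_weights(0.4, 5))  # ~ [1, -0.4, -0.12, -0.064, -0.0416]
```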

Feb 26
Heavily regularized kernel PCA methods are awesome. Very often plain PCA isn't appropriate. Offset-from-midprice vs execution is one feature for an execution-prediction model that would not have a good time with PCA, because it is a parabola, but still a very simple U shape...
The issue with introducing polynomials or kernel methods for dimensionality reduction is that you can easily overfit, especially when noise is heavily present. Regularization should be proportional to the noise and to your accuracy metrics. Another comment to add: PCA can...
destroy the structure of your data if you apply it to super-HFT data. I have only given PCA so far as I don't want to go too deep into kernel methods, or give too much alpha away, but I will say that a great method is a state-based model between linear, heavily regularized...
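As a sketch of the idea (not the author's exact method), here is a small NumPy RBF kernel PCA on a U-shaped feature; in this toy form the "regularization" is just a smooth kernel (small gamma) and keeping few components, a crude stand-in for the heavier machinery alluded to above:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    # RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Centre the kernel in feature space.
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    K = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose and keep only the top components; truncation and a
    # small gamma act as the (crude) regularization here.
    vals, vecs = np.linalg.eigh(K)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# U-shaped (parabolic) feature pair where plain PCA struggles:
x = np.linspace(-1, 1, 50)
X = np.column_stack([x, x**2])
Z = rbf_kernel_pca(X, n_components=2, gamma=2.0)
print("projected shape:", Z.shape)
```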
Feb 25
Factor models are basically what you get if you decompose returns into some linear regressions and then pretend the result is ergodic because of a "risk premium". Risk premiums don't necessarily exist, but for a lot of these there does exist increasing tail risk alongside...
better performance of said factors. As seen in 2008, the big brown line of momentum is eventually corrected, but that is less a risk-premium story and more the August risk of multi-strat funds getting blown up in credit and then covering with momentum, forcing...
momentum the other way. However, for other factors like quality and profitability there is little to no basis for a risk factor, and the issue with assuming this level of ergodicity is that you may hold a position that no longer has alpha, expecting to be compensated...
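The "decompose returns into some linear regressions" step is just OLS of asset returns on factor returns. A minimal NumPy sketch on synthetic data (the factor count, betas, and noise levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
# Hypothetical daily factor returns (e.g. market, momentum):
factors = rng.normal(0, 0.01, size=(T, 2))
true_betas = np.array([1.2, 0.5])
alpha = 0.0002
asset = alpha + factors @ true_betas + rng.normal(0, 0.005, T)

# Factor model = OLS of asset returns on factor returns (plus intercept).
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
est_alpha, est_betas = coef[0], coef[1:]
print("alpha:", round(est_alpha, 5), "betas:", np.round(est_betas, 2))
```

The thread's warning is about the step after this regression: assuming the estimated betas and premia are stable ("ergodic") going forward.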