Thread incoming: Why China will win the AI race in 8-10 years (and why that might be good for the US) 🧵
The core reason: Chinese people are willing to trade privacy for convenience. Americans are paranoid about their data being sold and will go out of their way to make sure that their data isn't being shared.
This gives China a massive domestic market advantage - imagine training AI systems on a population that's comfortable sharing data at scale. It's like having a home field advantage in the biggest stadium in the world.
My DeepMind friends think it'll be sooner than 8-10 years.
Then there's state coordination.
Look at EVs: Xiaomi, BYD, etc. dominating globally. My Roborock vacuum puts Roomba to shame. When the state aligns with market forces, things move fast.
The talent pipeline is shifting too.
Those researchers getting $100M packages? Check their last names - we've been importing research talent for years through favorable education policies. But Chinese students now prefer Japan over the US for college.
Problem: Japan lags China in startup innovation. So most are heading home after graduation.
The timing math: 2025 grad student cohort + time to mature + US companies burning their cash advantage on poaching = my 8-10 year prediction window.
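To spell that timing math out, here's a back-of-the-envelope version. Every duration below is an assumption on my part; only the 2025 cohort and the 8-10 year window come from above.

```python
# Back-of-the-envelope timeline; the durations are assumed, not data.
cohort_enters_grad_school = 2025   # the cohort referenced above
years_to_finish_phd = 5            # assumed length of a research degree
years_to_senior_impact = 4         # assumed ramp-up to leading serious projects

prediction_year = cohort_enters_grad_school + years_to_finish_phd + years_to_senior_impact
print(prediction_year)  # 2034, i.e. roughly 9 years out -> the 8-10 year window
```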
Plot twist: China "winning" the AI race might actually be great for the US if we stay competitive at higher-level services. Not everything is zero-sum.
Also, uhh, since people are racist here: my colleague who originally posted this and I are both American. We shop at Walmart, hoard toilet paper and gas, etc. This is not a political take, just pattern recognition.
IEX Cloud is @DatabentoHQ's competitor, but my heart sank upon their shutdown. Here are my takeaways: 1. Sales > Product. It's about who has the best sales team, not about who has the best product (for better or worse). IEX Cloud relied purely on PLG (product-led growth), with 0 SLG (sales-led growth). 🧵
2. Institutional >>> Retail. Selling to retail customers (or even retail platforms) isn't profitable.
3. Democratization comes at a cost. Funny enough, pricing out retail users goes against IEX's mission of democratizing the data industry. But sticking to retail traders is a failing formula. Kudos to IEX for choosing mission over profits.
"What do you do that AI can't do?" I worked in HFT. Below are the limitations of AI in my industry. 🧵
While AI has made enormous strides over the last decade, there are 8 reasons why AI hasn't consistently predicted the stock market. (It's like 8 floors of a dungeon that AI has to clear before winning the game.)
1. Hidden information (unlike chess where all info is presented)
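A toy way to picture the hidden-information gap (the fields below are hypothetical, just to contrast with a perfect-information game like chess):

```python
# Chess: the entire game state is visible to both players.
chess_state = {
    "board": "all 64 squares and every piece are observable",
    "side_to_move": "white",
}

# Markets: a model only sees a thin public slice of the true state.
observable = {
    "top_of_book": {"bid": 100.01, "ask": 100.02},
    "recent_trades": [(100.01, 200), (100.02, 150)],   # (price, size)
}
hidden = {
    "iceberg_orders": None,         # resting size you can't see
    "other_firms_positions": None,  # competitors' inventory and intent
    "pending_news": None,           # information not yet public
}
# A predictor trained only on `observable` is solving a partially observed
# problem, unlike a chess engine that conditions on the full state.
```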
Here's a hard truth that quant firms can't say out loud: According to multiple sources from Tier 1 quant/HFT/prop shops, the number one indicator of success as a quant trader is the university they attended... for undergrad. 🧵
People will always disagree, and there are plenty of exceptions at Tier 1 firms. However, these are some of the best data scientists in the world. I'm inclined to believe them, because their livelihood depends on their ability to wrangle data and remove biases.
This is why Tier 1 quant shops recruit from schools like MIT & CMU. In contrast, Goldman Sachs recruits for back-office roles from the University of Utah, BYU, & ASU, because their data shows that students from those schools are more likely to succeed in those roles than students from MIT.
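If you wanted to sanity-check a claim like this yourself, a minimal sketch might look like the code below. The file name, columns, and features are entirely hypothetical; the thread doesn't share any dataset, so treat this purely as an illustration.

```python
# Hypothetical check: how much does undergrad school predict trader success?
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("candidates.csv")   # hypothetical columns: undergrad_school, gpa, olympiad_medals, succeeded
X, y = df[["undergrad_school", "gpa", "olympiad_medals"]], df["succeeded"]

model = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"), ["undergrad_school"]),
        remainder="passthrough",
    ),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
# Compare held-out accuracy with and without the school feature to see how
# much predictive power it carries relative to the other features.
```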
"Buy vs. build" is a classic problem many startups face. For example, on the hedge fund technology side (think: OMS, data feeds, market simulation, etc) there's some classic tradeoffs and risks to consider:
- Imagine you're looking for software or data, but nothing fits your criteria. Existing solutions match 70% of what you need, but there's frustrating gaps. So you consider building it yourself.
- That's what we did at Domeyard, but we ended up suffering some consequences:
- Building tech costs time, talent, and money, and we underestimated all of that. It took us 3 years to launch the fund, which is a lifetime in the startup world. I don't think I can survive going 3 years without a salary anymore.
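The tradeoff is easier to see with rough numbers. All of the costs below are made up for illustration; the only figure taken from above is the 3-year build time.

```python
# Buy vs. build, back of the envelope (all dollar figures are made up).
vendor_annual_license = 250_000      # assumed cost to buy the 70% solution
annual_glue_work = 100_000           # assumed cost to patch the missing 30%

engineers = 3
fully_loaded_cost_per_engineer = 300_000
years_to_build = 3                   # the build time quoted above

buy_cost = (vendor_annual_license + annual_glue_work) * years_to_build
build_cost = engineers * fully_loaded_cost_per_engineer * years_to_build
print(f"buy   ~ ${buy_cost:,} over {years_to_build} years")
print(f"build ~ ${build_cost:,} over {years_to_build} years, plus a {years_to_build}-year launch delay")
```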
(On HFT data streaming) The challenge is to build servers that provide a simple firehose AND more stateful capability to manage customized subscriptions and filter symbols. Here's how we did it:
1. Keep things simple and choose boring technology.
The funny thing is, coming from the HFT market making world, we weren't very familiar with the modern tech stack & frameworks for distributed streaming. In HFT, distributed processing is often an antipattern.
When we surveyed the landscape, many of the existing vendors were using Kubernetes, Kafka, WebSocket, multiple flavors of databases like kdb and Vertica, and needed a whole rack of servers. But we did our napkin math and thought all that was unnecessary.
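For flavor, the napkin math might look something like the sketch below; the message rates and sizes are placeholder numbers of mine, not actual feed stats.

```python
# Napkin math with placeholder numbers: can one boring server keep up?
peak_msgs_per_sec = 5_000_000    # assumed peak feed message rate
bytes_per_msg = 40               # assumed compact binary encoding

throughput_bits = peak_msgs_per_sec * bytes_per_msg * 8
print(f"{throughput_bits / 1e9:.1f} Gbps")   # 1.6 Gbps, which fits on a single NIC
# If one NIC and a handful of cores absorb the peak, a Kafka cluster plus
# Kubernetes plus a rack of servers starts to look like accidental complexity.
```

And the "simple firehose plus filtered subscriptions" shape from the start of this thread can stay equally boring. This is a toy sketch, not our production code: one process, an in-memory map of client subscriptions, no external queue.

```python
import asyncio
import json

# writer -> subscribed symbols; "*" means the raw firehose
subscriptions: dict[asyncio.StreamWriter, set[str]] = {}

async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    subscriptions[writer] = {"*"}                   # default: full firehose
    try:
        while line := await reader.readline():      # e.g. {"action": "subscribe", "symbols": ["AAPL"]}
            msg = json.loads(line)
            if msg.get("action") == "subscribe":
                subscriptions[writer] = set(msg["symbols"])
    finally:
        subscriptions.pop(writer, None)
        writer.close()

def publish(tick: dict) -> None:
    # Called by the feed-ingest loop (not shown) for every incoming tick.
    payload = (json.dumps(tick) + "\n").encode()
    for writer, symbols in list(subscriptions.items()):
        if "*" in symbols or tick["symbol"] in symbols:
            writer.write(payload)

async def main() -> None:
    server = await asyncio.start_server(handle_client, "127.0.0.1", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```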
#quant "What separates the top 3-5 firms from the tier 2 firms after them?" - a thread by my colleague
I've carried out a lot of #quant research interviews & run a fund. I've noticed that junior candidates tend to obsess over vanity metrics like alpha, latency, AUM, Sharpe, etc. These are important, but here are 3 things that are probably more predictive of a firm's success.
When you look at tier 1 and tier 2 firms in any niche, they all have strong QRs and engineers, pedigree, similar tech, capital, and shared knowledge of microstructure tricks. So simple vanity metrics can't explain what sets them apart (maybe they could have prior to 2012, but not now).