1/ This is the Strava heatmap of running around the world. It's truly fascinating, and gets to the core of how certain places, particularly European cities, are designed for outdoor activity while others aren't. Let's look more closely
2/ Let's look at the Bay Area and New York City to get a glimpse of US cities. The Bay, despite being sprawled out, has running routes connecting the entire peninsula down to South Bay.
NYC is denser, but people run everywhere - parks, the coast, bridges, even through the middle of town.
3/ Cities in Europe - Paris, Barcelona and London, for example - follow a similar pattern: a few core runnable boulevards with many tributaries that cover the entire landscape.
4/ "European" cities outside Europe, like Tel Aviv and Istanbul are Tel Aviv are similar. Beautiful coastal runs with smaller lanes branching out into the city. Gorgeous, well planned places.
5/ Israel, in my experience, is known for being well spread out rather than clustered into a few big cities. The data backs this up: Israel is one of the only countries I found that seems entirely runnable, not just in pockets.
6/ This gets me to my core point. I've always been annoyed by Indian cities because, regrettably, they're made up of pockets of activity and un-runnable roads - like ill-fitting puzzle pieces.
7/ We see little islands of planned / closed neighborhoods or lakes that are far removed from each other. The spines of the city aren't pedestrian-friendly at all.
8/ If you think density is to blame, that's not true. Dense Asian cities can be very well connected - take a look at Tokyo, Shanghai or Singapore. Even compared to their European counterparts, there are fewer prominent spines and more of a web of interconnected roads.
9/ Even though Strava data is biased toward countries with more smartphones, the story it paints empirically rings true. I wish more Indian cities strove to be more runnable.
My favorite city, though, was this one: a single long continuous run by the river. Guess?
This new DeepMind research shows just how broken vector search is.
Turns out some docs in your index are theoretically impossible for vector search to retrieve, given the embedding's dimension count.
Plain old BM25 from 1994 outperforms it on recall.
1/4
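To see why dimension count matters, here's a toy demonstration of my own (not from the paper): with 1-dimensional embeddings and dot-product scoring, the ranking is fixed by the sign of the query, so only 2 of the 10 possible top-2 subsets of five docs are ever retrievable, no matter the query. The paper's LIMIT benchmark constructs the same kind of impossibility at realistic dimensions.

```python
# Toy illustration (not from the paper): with low-dimensional embeddings,
# some top-k subsets are unreachable by ANY query vector.
import itertools
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 1))        # 5 docs with 1-dim embeddings
reachable = set()
for _ in range(10_000):               # brute-force random queries
    q = rng.normal(size=(1,))
    scores = docs @ q                 # dot-product relevance scores
    top2 = tuple(sorted(np.argsort(-scores)[:2]))
    reachable.add(top2)

all_pairs = list(itertools.combinations(range(5), 2))
print(f"reachable top-2 sets: {len(reachable)} of {len(all_pairs)}")
# -> reachable top-2 sets: 2 of 10
```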
As someone who's been a search nerd for more than a decade, this result gives me a lot of joy.
Haters will say that the dataset the authors created, LIMIT, is synthetic and unrealistic, but this matches my experience building search systems at Google / Glean.
Vector search was popularized as an approachable drop-in solution as OpenAI embeddings grew in popularity, but it has clear limitations in production settings.
2/4
Even aside from this result showing it consistently misses certain docs, it:
– doesn't search for concepts well
– often retrieves similar but unrelated results
– doesn't account for non-content signals of similarity (recency, popularity) - see the sketch below
3/4
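On that last point, here's a minimal sketch of the kind of hybrid ranking production systems bolt on top of raw similarity. The weights, half-life, and popularity squashing are made-up illustrations, not any particular system's values:

```python
import math
import time

def hybrid_score(cos_sim: float, doc_timestamp: float, click_count: int,
                 w_sim: float = 0.7, w_rec: float = 0.2, w_pop: float = 0.1,
                 half_life_days: float = 30.0) -> float:
    """Blend vector similarity with recency and popularity signals."""
    age_days = (time.time() - doc_timestamp) / 86_400
    recency = 0.5 ** (age_days / half_life_days)    # exponential time decay
    popularity = math.log1p(click_count) / 10.0     # squashed to roughly [0, 1]
    return w_sim * cos_sim + w_rec * recency + w_pop * popularity
```

A pure vector index can't express signals like these, which is one reason most deployed systems re-rank on top of nearest-neighbor results anyway.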
I'm using GPT5 Pro to find me the best stocks and startup investments.
Asked it to use modern portfolio theory and size investments.
—Top Privates [+9.7%]: Databricks, Stripe, Anthropic, SpaceX
—Top Publics [+14.2%]: Nvidia, TSMC, Microsoft, Meta
Just put $1000 into the stocks!
Prompt: "Check all public / private stock market companies and tell me what I should invest in from first principles reasoning. You have $1000.
Please do deep research and present rationale for each investment. Each one should have a target price and expected value. Use advanced math for trading. Draw research from authoritative sources like research and unbiased pundits. Size my bets properly and use everything you know about portfolio theory. Corroborate each decision with a list of predictions about those companies.
Your goal is to maximize expected value. Make minimum 5 investments. Write it in a table."
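For reference, "use portfolio theory to size bets" boils down to something like this mean-variance (Markowitz) sketch. The expected returns and covariances below are made-up placeholders, not GPT-5 Pro's numbers:

```python
# Minimal mean-variance sizing sketch with invented inputs.
import numpy as np

tickers = ["NVDA", "TSM", "MSFT", "META"]      # the thread's public picks
mu = np.array([0.25, 0.18, 0.12, 0.15])       # hypothetical expected annual returns
cov = np.array([                              # hypothetical covariance matrix
    [0.090, 0.030, 0.020, 0.025],
    [0.030, 0.070, 0.015, 0.020],
    [0.020, 0.015, 0.060, 0.020],
    [0.025, 0.020, 0.020, 0.070],
])

w = np.linalg.solve(cov, mu)                   # tangency direction: w proportional to inv(cov) @ mu
w = np.clip(w, 0.0, None)                      # long-only, no shorting
w /= w.sum()                                   # weights sum to 1

budget = 1_000
for t, wi in zip(tickers, w):
    print(f"{t}: ${budget * wi:,.2f} ({wi:.1%})")
```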
This follows my previous experiment on Polymarket, which seemingly had ~2-4x the expected returns!
And yes, I know they've always reported on the 477 denominator, but that's NOT SWE-bench Verified; it's an entirely different metric, "OpenAI's subset of SWE-bench Verified", and that number can't be compared.
Microsoft just leaked their official compensation bands for engineers.
We often forget that you can be a stable, high-performing engineer with great work-life balance, be a BigTech lifer, and comfortably retire with a net worth of ~$15M!
The best open-source AI model just dropped a detailed report on how it was trained - a rare resource for students, given that frontier labs no longer publish theirs!
Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its pricing of $0.6/M input tokens and $2.5/M output tokens.
10 highlights:
1. Generating tokens by rewriting high-quality tokens with LLMs in pre-training
2. Mining 3,000+ MCPs and using LLM-generated personas to improve agentic tool calling
3. 10,000 parallel Kubernetes sandboxes to solve GitHub issues
4. New scaling laws for sparsity in MoE models
5. RL with verifiable rewards (RLVR) for math, coding and safety, with a self-critique model and a long-reasoning penalty that yields direct, decisive answers (sketched below)
6. Training recipe of 4k sequences, then 32k, then 128k with YaRN
7. High temperature during initial RL training to promote exploration
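A hypothetical sketch of what a verifiable reward with a length penalty might look like - my illustration of highlight 5, not Kimi's actual reward function:

```python
# Hypothetical RLVR-style reward: a verifiable pass/fail signal
# (e.g. unit tests or a math checker) minus a penalty for overly
# long reasoning. Illustrative only - not Kimi K2's actual function.
def rlvr_reward(completion: str, passed: bool,
                budget_tokens: int = 2048, penalty: float = 0.2) -> float:
    n_tokens = len(completion.split())            # crude token-count proxy
    base = 1.0 if passed else 0.0                 # verifiable outcome reward
    overflow = max(0, n_tokens - budget_tokens) / budget_tokens
    return base - penalty * overflow              # discourage rambling answers
```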