Varun Mohan
Jun 3
With less than five days' notice, Anthropic decided to cut off nearly all of our first-party capacity to the Claude 3.x models. Given the short notice, there may be some short-term Claude 3.x availability issues while we quickly ramp up capacity on other inference providers, but we believe we have now secured sufficient near-term capacity. We have been very clear to Anthropic that this is not our desire - we wanted to pay them for the full capacity. We are disappointed by this decision and by the short notice.

Gemini 2.5 Pro (now very high quality on Windsurf, with a new 0.75x promo rate), GPT-4.1, and more are all unaffected. We look forward to our continued partnership with all model providers.

More from @_mohansolo

Mar 12
There’s been a lot of talk recently about how Windsurf’s context retrieval is better than that of other products. One rebuttal I’ve seen is that all products “index your codebase”.

But indexing code ≠ context retrieval. It is necessary but not sufficient.

Thought I’d share a bit about what we’re doing under the hood to get the best results.
Indexing & embedding search is a tablestakes RAG technique. Btw, even for this technique there are approaches that make this more or less effective. One thing we are doing is AST parsing code and chunking along semantically meaningful boundaries - not random blocks of code. This means that when a code chunk is retrieved, it is a full function or class, not just an arbitrary block of consecutive code.
But embedding search becomes unreliable as a retrieval heuristic as the codebase grows. Instead, we must rely on a combination of techniques: grep/file search, knowledge-graph-based retrieval, and more. With all these heuristics in play, a re-ranking step also becomes necessary, where the retrieved context is ordered by relevance. We use LLM-based reranking under the hood.
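The thread names the ingredients (several retrieval heuristics feeding an LLM re-ranker) but not an interface, so the sketch below is my own composition of those pieces; every function name and stub body is a hypothetical placeholder, not Windsurf's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    path: str
    text: str

# Hypothetical retrievers standing in for the techniques the thread names.
# Stub bodies keep the sketch runnable.

def embedding_search(query: str) -> list[Chunk]:
    return []  # vector similarity over AST-level chunks

def grep_search(query: str) -> list[Chunk]:
    return []  # exact/regex matches across the repo

def graph_retrieval(query: str) -> list[Chunk]:
    return []  # walk a code knowledge graph (callers, callees, imports)

def llm_relevance(query: str, chunk: Chunk) -> float:
    """Score chunk relevance in [0, 1] by prompting an LLM (stubbed here)."""
    return 0.0

def retrieve_context(query: str, budget: int = 10) -> list[Chunk]:
    # 1. Merge candidates from several heuristics, deduplicating.
    pool: set[Chunk] = set()
    for retriever in (embedding_search, grep_search, graph_retrieval):
        pool.update(retriever(query))
    # 2. Re-rank the merged pool so the most relevant chunks
    #    land in the (limited) context window first.
    ranked = sorted(pool, key=lambda c: llm_relevance(query, c), reverse=True)
    return ranked[:budget]
```

The design point is that no single heuristic is trusted on its own: candidates are pooled and deduplicated, and the more expensive LLM re-ranker spends its budget only on that merged shortlist.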