Windsurf
The Windsurf Editor. Tomorrow's Editor, Today. @windsurf_ai
May 16 · 6 tweets · 3 min read
To train SWE-1, we had to create a data model and training recipe that took all of the complex states, tasks, and surfaces into consideration.

We then ran evals and experiments to evaluate performance against open and foundation models.

Here's what we did ↴

First, we evaluated how well the model could handle a user query mid-session.

Seamless collaboration with users on partially completed tasks is a crucial benchmark for model usefulness.

SWE-1 achieves near-parity with frontier models in helpfulness, accuracy, and edit quality.
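To make the shape of that eval concrete, here is a minimal sketch of what a mid-session eval record could look like. Everything here (SessionState, MidSessionEval, summarize, and their fields) is hypothetical and not SWE-1's actual data model; only the three judged dimensions come from the thread above.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionState:
    """Snapshot of a partially completed task (hypothetical fields)."""
    open_files: list[str]        # the surfaces the user is working in
    pending_edits: list[str]     # diffs already applied this session
    terminal_output: str = ""    # recent command output, if any

@dataclass
class MidSessionEval:
    """One eval case: a user query issued mid-session, plus judged scores."""
    state: SessionState
    user_query: str
    helpfulness: float = 0.0     # 0-1 judged score
    accuracy: float = 0.0
    edit_quality: float = 0.0

def summarize(cases: list[MidSessionEval]) -> dict[str, float]:
    """Average the judged scores across all eval cases for one model."""
    return {
        "helpfulness": mean(c.helpfulness for c in cases),
        "accuracy": mean(c.accuracy for c in cases),
        "edit_quality": mean(c.edit_quality for c in cases),
    }
```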
Apr 30 · 6 tweets · 1 min read
We asked our devs at Windsurf to share their thoughts on their favorite models and what they actually use them for.

Read their answers in the thread ↓

3.7 Sonnet:

It’s proactive and confident but can do too much at times. Regardless, it is generally seen as the most capable.

“3.7 is just super agentic and eager to use tools and do things. I prefer stopping an over-eager model vs. coaxing an under-eager one.”
Apr 8 · 9 tweets · 1 min read
Here are some of our favorite tips and tricks from the @windsurf_ai community!

Bookmark this and thank yourself later ↓

Slow Vibe Coding: Think, Plan, Prompt, Review, Validate, and Start Again
Mar 7 · 9 tweets · 2 min read
alright, MCP megathread 🧵

you should probably bookmark this ↓
Feb 23 · 7 tweets · 2 min read
Let's discuss how Large Language Models (LLMs) handle codebase structure and parsing, and what makes Windsurf particularly cracked in this area.

While most AI code tools treat code as unstructured text, Windsurf leverages Abstract Syntax Trees (ASTs) to comprehend code at the syntactic level.

Here's why this results in faster, more accurate suggestions: 🧵👇

Unlike other tools that rely on embedding indexes (a one-size-fits-all retrieval method that doesn't scale well for large repos), Windsurf's agent employs strategies akin to human developers to locate necessary context:

- Grep and file search
- File relation traversal (e.g., AST parsing)
- Web search and online documentation
- Parallel LLM-based searches

This approach ensures efficient and scalable context retrieval.
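As a rough illustration of the difference between text-level and syntax-level retrieval, here is a toy sketch built on Python's standard ast module. It is not Windsurf's implementation, just the general idea: walk a file's syntax tree and surface whole syntactic units (functions, classes) that an agent could rank and pull into context, instead of matching raw characters.

```python
import ast
from pathlib import Path

def outline(path: str) -> list[tuple[int, str]]:
    """List the functions and classes defined in a Python file.

    A stand-in for AST-based retrieval: rather than grepping text,
    we parse the file and return named syntactic units with the
    line numbers where they start.
    """
    tree = ast.parse(Path(path).read_text())
    items = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            items.append((node.lineno, node.name))
    return sorted(items)

if __name__ == "__main__":
    # Outline this very file as a quick smoke test.
    for lineno, name in outline(__file__):
        print(f"{lineno:>4}  {name}")
```

A real agent would combine this structural view with grep, file-relation traversal, and web search, then decide which units actually need to be loaded into the model's context.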
Nov 17, 2024 · 6 tweets · 2 min read
Copilots + Agents = Flows

Cascade feels like magic because it combines the collaborative nature of copilots with the independent power of agents.

Both Copilots and Agents are valuable, but not as much as Flows.

Let's break this down 🧵

Before the year 2022, humans and keyboards worked in unison, and code development was done completely manually. Every single line of code was a direct result of human input.