We actually didn't want to build a 'plan mode' in Cline. It went against our core principle of simplicity. But then we saw how our power users were using Cline, and it became clear we had to.
Some behind-the-scenes on Plan mode & why it's a critical paradigm in AI coding 🧵
Internally, and with our earliest users, we noticed a pattern. As the AI got more capable, people would instinctively say "wait, don't code yet" or "let me see a plan first." They needed a brake pedal for an AI that was too eager to help.
We were hesitant. Adding modes adds complexity, and we want Cline to feel intuitive. We debated it back in January, worried about forcing a specific workflow. The goal is to keep Cline feeling light and unopinionated.
But we realized this wasn't about forcing a workflow. It was about creating space for the most critical part of development: human intent. The AI can write the code, but only the human knows the WHY. Plan Mode became a necessary space for that dialogue.
So we stopped thinking of it as a "mode" and started seeing it as a paradigm. A deliberate separation between gathering intent (Plan) and executing on it (Act). It’s the same powerful AI throughout, just a different phase of collaboration.
(early concept)
What started as a reluctant feature has become a cornerstone of the Cline experience. It turns out that giving users a dedicated space to align with the AI before it acts leads to dramatically better outcomes.
It's been validating to see the broader AI coding community adopt similar planning phases recently. It confirms what we learned months ago: as AI gets more powerful, the space we create for human intent matters more than ever.
We turned a 50-question PDF on LLMs into a 10-episode lecture series, with Cline orchestrating the entire process.
Here’s a look at the workflow that made it possible, using @GoogleDeepMind 2.5 Pro to process the PDF and @elevenlabsio MCP to generate the lectures. 🧵
It all started with this great resource on LLM basics shared by @omarsar0.
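If you want to reproduce the audio step, wiring the ElevenLabs MCP server into Cline takes a single settings entry. A minimal sketch, based on the server's README at the time of writing (verify the command and env var name there; your setup may differ):

```json
{
  "mcpServers": {
    "elevenlabs": {
      "command": "uvx",
      "args": ["elevenlabs-mcp"],
      "env": { "ELEVENLABS_API_KEY": "your-api-key" }
    }
  }
}
```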
More Claude 4 optimizations, task timeline navigation, and CSV/XLSX support 🧵
We've been fine-tuning how Cline works with Claude 4, focusing on search/replace operations. The latest optimizations use improved delimiter handling that's showing great results in our testing.
The changes center on communication protocols -- we've adjusted how Cline formats instructions for Claude 4, using - and + delimiters instead of < and >, which aligns better with the model's training.
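For a concrete sense of the change (illustrative only -- the exact markers are an internal detail that may shift), a search/replace block that used to be fenced with angle-bracket conflict markers now looks something like:

```
------- SEARCH
const maxRetries = 3;
=======
const maxRetries = 5;
+++++++ REPLACE
```

Same semantics, different delimiters. And since - and + are what unified diffs use, the model has seen this pattern constantly in its training data.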
LLMs have static knowledge cutoffs. They don't know about library updates, new APIs, or breaking changes that happened after training. Context7 by @upstash bridges this gap by injecting real-time docs into Cline. 🧵
The reality: open-source libraries continue evolving past the knowledge cutoff date of even the latest frontier models. If you're using a library that updated in the last year, the AI is working with outdated information.
This leads to:
- Deprecated method suggestions
- Missing new features
- Incompatible syntax patterns
- Hours debugging "working" code
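Adding Context7 to Cline is one MCP server entry. A minimal sketch, assuming the package name from Context7's README (check there for the current invocation):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

From there, the docs it fetches land directly in the conversation, so the model works from current APIs instead of its training snapshot.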
Cline doesn't index your codebase. No RAG, no embeddings, no vector databases.
This isn't a limitation -- it's a deliberate design choice. As context windows grow, this approach gets better, not worse, at understanding your code.
Here's why.
🧵
The industry default: chunk your codebase, create embeddings, store in vector databases, retrieve "relevant" pieces.
But code doesn't work in chunks. A function call in chunk 47, its definition in chunk 892, the context that explains why? Scattered everywhere.
We believe in the agentic power of the models, and with Claude's 200K+ context window, we don't need clever retrieval. We need intelligent exploration.
So Cline reads code the way you do -- following imports, tracing dependencies, building connected understanding.
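A toy sketch of the difference in TypeScript (not Cline's implementation, just the shape of the idea): instead of retrieving embedding chunks, start at a file, follow its imports, and collect whole files as context.

```typescript
import { readFileSync } from "fs";
import { dirname, resolve } from "path";

// Toy sketch: follow relative imports from an entry file and collect
// whole files as context, rather than retrieving embedding chunks.
// (Illustrative only -- real resolution handles tsconfig paths,
// package imports, extensions other than .ts, etc.)
function collectContext(entry: string, seen = new Set<string>()): string[] {
  const file = resolve(entry);
  if (seen.has(file)) return [];
  seen.add(file);

  const source = readFileSync(file, "utf8");
  const context = [`// ${file}\n${source}`];

  // Naive regex for `import ... from "./x"` style relative imports.
  for (const match of source.matchAll(/from\s+["'](\.\.?\/[^"']+)["']/g)) {
    const target = resolve(dirname(file), match[1]) + ".ts"; // assumes .ts
    try {
      context.push(...collectContext(target, seen));
    } catch {
      // Skip imports this sketch can't resolve.
    }
  }
  return context;
}
```

The point isn't the code -- it's that the unit of context is a whole, connected file: the same unit you'd read yourself.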