Cline
Autonomous coding agent right in your IDE. Join the Discord! https://t.co/QdaXOm1KB2
Jun 24 • 7 tweets • 3 min read
Claude Max subscribers:

You can use your subscription in Cline instead of paying per-token API pricing.

Here's how you can set it up 🧵

Setup is simple:

1. Install Claude Code following Anthropic's guide
2. In Cline: Settings > API Configuration > select "Claude Code"
3. Set the path to your Claude Code CLI (can be just "claude")
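A quick way to find the value for step 3, as a minimal sketch; the `shutil` check and messages below are illustrative, not part of Cline:

```python
# Minimal sketch: confirm the Claude Code CLI is on your PATH before
# entering it in Cline's "Claude Code" provider settings. The binary
# name "claude" matches Anthropic's install; everything else here is
# illustrative.
import shutil

path = shutil.which("claude")
if path:
    print(f"Enter this (or simply 'claude') in Cline: {path}")
else:
    print("claude not found on PATH; use the full path to the CLI instead.")
```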
Jun 14 • 7 tweets • 2 min read
This chart from Artificial Analysis is the only thing you need to see to understand the state of AI in 2025.

The "best" model is a constantly moving target. 🧡 Image This is why we're seeing so much developer tool fatigue.

If your AI coding tool is locked into a single vendor, you're guaranteed to be falling behind the state-of-the-art for months at a time.
Jun 12 • 8 tweets • 3 min read
We actually didn't want to build a 'plan mode' in Cline. It went against our core principle of simplicity. But then we saw how our power users were using Cline, and it became clear we had to.

Some behind-the-scenes on Plan mode & why it's a critical paradigm in AI coding 🧵

Internally, and with our earliest users, we noticed a pattern. As the AI got more capable, people would instinctively say "wait, don't code yet" or "let me see a plan first." They needed a brake pedal for an AI that was too eager to help.
Jun 11 • 8 tweets • 2 min read
"Intelligence too cheap to meter is well within grasp"

Here's why we've been building Cline for this exact future -- where inference abundance, not scarcity, defines how developers work.

🧵

We made a deliberate choice:

While others architect around inference scarcity (caps, throttling, hidden operations), we built for abundance.

Why?
Jun 10 • 5 tweets • 2 min read
Just pushed Cline v3.17.12

This release includes free Grok 3 access for 2 weeks, collapsible MCP panels, better file context, and more reliable diff edits.

Full details below. 🧵

For the next two weeks, we're offering free Grok 3 access in partnership with xAI.
Jun 6 • 6 tweets • 2 min read
We turned a 50-question PDF on LLMs into a 10-episode lecture series, with Cline orchestrating the entire process.

Here's a look at the workflow that made it possible, using @GoogleDeepMind 2.5 Pro to process the PDF and @elevenlabsio MCP to generate the lectures. 🧵

It all started with this great resource on LLM basics shared by @omarsar0.
Jun 4 • 11 tweets • 2 min read
Just shipped Cline v3.17.9 👇

More Claude 4 optimizations, task timeline navigation, and CSV/XLSX support 🧵

We've been fine-tuning how Cline works with Claude 4, focusing on search/replace operations. The latest optimizations use improved delimiter handling that's showing great results in our testing.
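For readers curious what a delimiter-based search/replace edit looks like in principle, here's a minimal sketch; the function and matching rule are illustrative, not Cline's actual edit format:

```python
# Toy model of a search/replace edit: the model proposes an exact
# "search" block that must match the file verbatim before the
# "replace" block is applied. Cline's real delimiters and validation
# are more involved; this only illustrates the idea.
def apply_edit(file_text: str, search: str, replace: str) -> str:
    if search not in file_text:
        raise ValueError("search block does not match the file exactly")
    return file_text.replace(search, replace, 1)  # targeted, single edit

original = "def greet():\n    print('hi')\n"
print(apply_edit(original, "print('hi')", "print('hello, world')"))
```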
May 31 • 7 tweets • 2 min read
LLMs have static knowledge cutoffs. They don't know about library updates, new APIs, or breaking changes that happened after training. Context7 by @upstash bridges this gap by injecting real-time docs into Cline. 🧵

The reality: open-source libraries continue evolving past the knowledge cutoff date of even the latest frontier models. If you're using a library that updated in the last year, the AI is working with outdated information.
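The pattern Context7 automates looks roughly like this (a sketch only; the URL and prompt wiring are placeholders, and in Cline the Context7 MCP server performs the retrieval step):

```python
# Sketch of docs injection: fetch current documentation at request time
# and put it in front of the model so it isn't limited to its training
# cutoff. URL and prompt text are placeholders.
from urllib.request import urlopen

DOCS_URL = "https://example.com/library/CHANGELOG.md"  # placeholder
fresh_docs = urlopen(DOCS_URL, timeout=30).read().decode("utf-8", errors="replace")

prompt = (
    "Answer using only the API described in these up-to-date docs:\n\n"
    + fresh_docs[:4000]
    + "\n\nTask: update our integration to the latest release."
)
print(prompt[:200])
```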
May 27 • 9 tweets • 2 min read
Cline doesn't index your codebase. No RAG, no embeddings, no vector databases.

This isn't a limitation -- it's a deliberate design choice. As context windows increase, this approach enhances Cline's ability to understand your code.

Here's why.

🧵

The industry default: chunk your codebase, create embeddings, store in vector databases, retrieve "relevant" pieces.

But code doesn't work in chunks. A function call in chunk 47, its definition in chunk 892, the context that explains why? Scattered everywhere.
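For contrast, here's roughly what that default pipeline looks like (toy code; embed() is a stand-in, not any particular library):

```python
# Illustrative sketch of the "industry default" described above:
# fixed-size chunking plus embeddings in a vector index. The snippet
# only shows how a call site and its definition can land in different
# chunks, which is the failure mode being criticized.
def chunk(text: str, size: int = 80) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> list[float]:
    # placeholder for a real embedding model
    return [float(sum(map(ord, text)) % 997)]

codebase = (
    "def tax(total):\n    return total * RATE\n\n"
    + "# ... hundreds of lines elsewhere ...\n" * 10
    + "RATE = 0.2  # the definition the call above depends on\n"
)
chunks = chunk(codebase)
index = [(embed(c), c) for c in chunks]
# A similarity search can retrieve the chunk containing tax() but miss
# the distant chunk that defines RATE, losing the relationship.
print(f"{len(chunks)} chunks; the call and its definition live in different ones")
```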
May 26 • 12 tweets • 3 min read
ICYMI: Here's what's new in Cline over the past couple weeks 🧵

(& a cline wallpaper if you're so inclined)

First up -- Task Timeline in v3.15. Now you can see exactly what Cline is doing with a visual "storyboard" right in your task header. Every tool call, every file edit, all laid out chronologically. Hover for details.
May 22 • 6 tweets • 2 min read
Anthropic's Claude 4 models are here.

Opus 4 and Sonnet 4 both show strong (& improved) coding abilities, with Sonnet 4 at 72.7% on SWE-bench and Opus 4 at 72.5%

What does this mean for developers using Cline? 🧵

Claude Opus 4 is engineered for complex, long-running agentic workflows. It's been trained to create and update 'memory files' (akin to context files in Cline), enhancing long-term task awareness. This power comes at a steep API cost ($15/$75 per 1M tokens).
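To put that pricing in perspective, a rough back-of-the-envelope calculation (the token counts are hypothetical, only the per-token rates come from the announcement):

```python
# Back-of-the-envelope cost at Opus 4's listed rates: $15 per 1M input
# tokens, $75 per 1M output tokens. The token counts are made up, just
# to show the order of magnitude for one large agentic task.
INPUT_RATE, OUTPUT_RATE = 15 / 1_000_000, 75 / 1_000_000

input_tokens, output_tokens = 200_000, 20_000  # illustrative numbers
cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"~${cost:.2f} for a single task")  # ~$4.50
```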
May 16 • 8 tweets • 2 min read
The playbook for AI engineering is being written as we speak. Success isn't magic, it's method. For dev teams looking to build better software, faster with AI, here are 5 foundational pillars to consider. 🧵

Pillar 1: Understand What AI Engineering Really Is.
It's evolved beyond copy-pasting. True AI engineering is deep, IDE-native collaboration, blending human insight with AI capabilities. Think of AI as a brilliant, fast, but forgetful pair-programmer.
May 15 • 19 tweets • 4 min read
If you're an engineer who's feeling hesitant or overwhelmed by the innovation pace of AI coding, this thread is for you.

Here's the 10% of fundamentals that will put you in the 90th percentile of AI engineers.

🧵/many

First, a crucial mindset shift: stop treating AI like a vending machine for code. Effective AI Engineering is IDE-native collaboration. It's a strategic partnership blending your insight with AI's capabilities.

Think of AI as a highly skilled (but forgetful) pair programmer.
May 13 • 7 tweets • 2 min read
Want to try Cline for free? @OpenRouter has free models that offer a peek into the future of commoditized inference.

Four models that are worth a try:

deepseek/deepseek-chat-v3-0324:free
meta-llama/llama-4-maverick:free
deepseek/deepseek-r1:free
qwen/qwen3-235b-a22b:free

🧵

Here's how you can get started using Cline with a free model from OpenRouter 👇
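Inside Cline you'd select OpenRouter as the API provider and paste your key, but the same free model IDs can also be exercised directly against OpenRouter's OpenAI-compatible endpoint. A minimal sketch, assuming the `openai` Python package is installed and an OPENROUTER_API_KEY environment variable is set:

```python
# Minimal sketch of calling one of the free models listed above through
# OpenRouter's OpenAI-compatible API. The prompt is illustrative; inside
# Cline you'd instead configure OpenRouter as the provider in settings.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324:free",
    messages=[{"role": "user", "content": "Write a haiku about free inference."}],
)
print(resp.choices[0].message.content)
```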
May 12 • 11 tweets • 3 min read
AI coding performance often dips when context windows exceed ~50% fullness, leading to errors or sluggishness.

Cline uses built-in context awareness + a customizable .clinerule to automatically trigger the new_task tool, keeping performance optimal.

Here's how it works: 🧵

As you work with Cline, the context window fills up -- with your prompts, Cline's responses, file contents, tool outputs, etc. Think of it like RAM. More context can be good, but too much can overwhelm.
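The underlying rule is simple enough to sketch (the numbers and names below are illustrative, not Cline internals):

```python
# Toy version of the ~50% rule: track how full the context window is
# and hand off to a fresh task past a threshold. CONTEXT_WINDOW and
# the 0.5 cutoff are illustrative; the real behavior lives in Cline's
# context tracking plus your .clinerule.
CONTEXT_WINDOW = 200_000  # tokens; depends on the model
THRESHOLD = 0.5           # the ~50% mark where quality tends to dip

def should_start_new_task(tokens_used: int) -> bool:
    return tokens_used / CONTEXT_WINDOW >= THRESHOLD

if should_start_new_task(tokens_used=118_000):
    print("Trigger new_task: summarize progress and continue in a fresh context.")
```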
May 11 • 12 tweets • 4 min read
Just pushed Cline v3.15! Task timeline, Gemini Implicit Caching, Open-source docs, & much more

Here's what we've got for ya ↓

We've created a Task Timeline -- see Cline's workflow as a visual "storyboard" in your task header. Understand tool calls, file edits & more at a glance. Hover for instant summaries. Clarity for complex tasks, and built by one of our community contributors (thanks eomcaleb!)
May 9 • 6 tweets • 2 min read
Here's how you can use the @firecrawl_dev MCP server to turn online docs into a functional .clinerule in minutes.

This makes Cline a pro in any library you're using. The core idea is simple:
1. Provide the URL of the documentation you want to process.
2. The @firecrawl_dev MCP scrapes and extracts the key information.
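Done by hand, the same workflow looks something like this (the URL, filename, and naive text handling are placeholders; in practice the @firecrawl_dev MCP does the scraping and the model writes a curated rule):

```python
# Hand-rolled stand-in for the workflow described above: fetch a docs
# page and distill it into a .clinerules entry. Everything here is
# illustrative; a real rule would be a curated summary, not raw page text.
from pathlib import Path
from urllib.request import urlopen

DOCS_URL = "https://example.com/library/quickstart"  # placeholder
raw = urlopen(DOCS_URL, timeout=30).read().decode("utf-8", errors="replace")

rule = (
    "# Library usage rules (generated from docs)\n\n"
    "Source: " + DOCS_URL + "\n\n"
    + raw[:2000]
)
Path(".clinerules").mkdir(exist_ok=True)
Path(".clinerules/library-docs.md").write_text(rule)
```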
May 7 • 7 tweets • 2 min read
The upgraded Gemini 2.5 Pro (03-25 --> 05-06) has been available for over 24 hours now, and we're already seeing some exciting feedback from our users putting it to the test.

So, what's the word on how it's actually performing in Cline?

(hint: very well) ↓

One of the first things users are noticing is a smoother experience with file edits. If you've wrestled with "diff edit" errors with complex changes or large files before, the new version seems to navigate these much more reliably. We're hearing it's "actually doing well with larger files" now.
May 7 • 9 tweets • 3 min read
We nearly decided against creating Plan & Act modes, but now we see planning as a cornerstone feature of Cline.

Effective AI coding isn't prescriptive -- it demands a shared strategy: moving from one-shot Hail Marys to context-rich, agent-driven success. 🧵

Here's what we noticed: LLMs can be a bit antsy. They're often quick to generate code, sometimes before fully grasping the nuanced context of your project. This can lead to that frustrating cycle of near-misses. We saw a need for a more deliberate approach.
May 6 • 5 tweets • 2 min read
Cline for Research?

Combine MCP servers like @perplexity_ai and @firecrawl_dev with .clinerules to create specialized research workflows you can toggle on/off.

Here's how you can combine MCP Servers & .clinerules to make specialized workflows 🧵 ↓

Our rule guides the process:
1. Refine topic? (Yes/No)
2. Choose method? (AI Search/Deep Crawl/etc)
3. Pick output? (Chat/MD/JSON)
4. Cline executes via the chosen MCP

You just need to click the right buttons. (cline-for-research.md)
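Conceptually, the rule is just a small decision tree mapping your answers to an MCP call; a toy sketch with placeholder names:

```python
# Toy sketch of the rule-guided flow: step 1 (refine topic? yes/no)
# happens in chat, step 2 picks the MCP server, step 3 picks the output
# format, step 4 is where Cline would actually invoke the server. All
# names are placeholders for the real @perplexity_ai / @firecrawl_dev tools.
METHODS = {"AI Search": "perplexity MCP", "Deep Crawl": "firecrawl MCP"}
OUTPUTS = {"Chat", "MD", "JSON"}

def run_research(topic: str, method: str, output: str) -> str:
    server = METHODS[method]   # step 2: choose method
    assert output in OUTPUTS   # step 3: pick output
    # step 4: Cline executes via the chosen MCP; here we just describe it.
    return f"Research '{topic}' via {server}, deliver as {output}"

print(run_research("agentic coding benchmarks", "Deep Crawl", "MD"))
```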
May 3 • 13 tweets • 3 min read
Cline v3.14 is live.

Here's what's new (mega 🧵) ↓

First up, significant improvements to Gemini caching & cost transparency:
- Refined caching logic for Gemini/Vertex for better reliability & cost savings.
- Added Cache UI for OpenRouter/Cline providers.
- Enabled pricing calculations for Gemini/Vertex.