If you're an engineer feeling hesitant or overwhelmed by the pace of innovation in AI coding, this thread is for you.
Here's the 10% of fundamentals that will put you in the 90th percentile of AI engineers.
🧵/many
First, a crucial mindset shift: stop treating AI like a vending machine for code. Effective AI Engineering is IDE-native collaboration. It's a strategic partnership blending your insight with AI's capabilities.
Think of AI as a highly skilled (but forgetful) pair programmer.
The single biggest lever for better AI-generated code? Planning before AI writes any code. Frontload all relevant context -- files, existing patterns, overall goals. Then, collaboratively develop a strategy with your AI.
(this is why Cline has Plan/Act modes)
Why does planning work so well?
It ensures a shared understanding, so AI truly grasps what you're trying to achieve and its constraints.
This drastically improves accuracy and leads to more relevant code, massively reducing rework by catching misunderstandings early.
Invest time here; save 10x downstream.
Next up, you need to master the AI's "context window." This is its short-term memory, holding your instructions, code, chat history, etc.
It's finite. When it gets too full (often >50% for many models), AI performance can dip. It might start to "forget" earlier parts of your discussion.
Proactive context management is key to avoiding this.
Be aware of how full the window is. For long chats, use techniques to summarize the history (/newtask).
For extended tasks, break them down (/smol). Start "new tasks" or sessions, carrying over only essential, summarized context to keep the AI focused.
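The budgeting idea above can be sketched in a few lines. This is a rough illustration, not Cline's internals: the chars/4 token heuristic and the 50% threshold are assumptions drawn from the rule of thumb in this thread (real tokenizers vary by model).

```python
# Sketch of proactive context budgeting. The chars/4 heuristic and the
# 50% summarize threshold are illustrative assumptions, not a real API.

CONTEXT_WINDOW = 200_000   # e.g. a 200K-token model
SUMMARIZE_AT = 0.5         # performance can dip past ~50% utilization

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English prose/code.
    return len(text) // 4

def should_summarize(history: list[str]) -> bool:
    # When the running total crosses the threshold, it's time to
    # condense history (e.g. /newtask) before quality degrades.
    used = sum(estimate_tokens(msg) for msg in history)
    return used / CONTEXT_WINDOW > SUMMARIZE_AT
```

The exact numbers matter less than the habit: track utilization and condense before the model starts dropping earlier context.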
When it comes to choosing an AI model, simplify your approach by prioritizing models with strong reasoning, instruction following, and coding capabilities.
Top-tier models like Gemini 2.5 Pro or Claude 3.7 Sonnet are excellent starting points. Though expensive compared to less-performant models, most developers find the ROI worth it.
Don't skimp on model quality here.
While cheaper or smaller models can be fine for simple, isolated tasks, for intricate, multi-step AI engineering that relies on reliable tool use, investing in a more capable model usually pays off significantly in speed, quality, and reduced frustration.
Now, let's talk about giving your AI guidance so you stop re-explaining the same things every session. Use "Rules Files" -- essentially custom instructions -- to persistently guide AI behavior.
These can enforce your coding standards, define project context like tech stack or architecture, or automate common workflows.
Complement Rules Files with "Memory Banks." This is a pattern of creating structured project documentation (e.g., in a `memory-bank/` folder with files like `project_brief.md`, `tech_context.md`) that your AI reads at the start of sessions.
This allows the AI to "remember" critical project details, patterns, and decisions over time.
The payoff for implementing these "memory" systems is huge:
You get consistent AI behavior aligned with your project, a reduced need for repetitive explanations, and faster onboarding for new team members.
It’s a scalable way to manage knowledge as projects grow.
So, to recap the fundamentals that deliver outsized impact in AI engineering:
1. Collaborate strategically with your AI; don't just prompt.
2. Always plan WITH your AI before it codes.
3. Proactively manage the AI's context window.
4. Use capable models for complex, agentic work.
5. Give your AI persistent knowledge through Rules Files & Memory Banks.
Focus on these fundamentals, and you'll understand the 10% of what matters in AI coding.
The goal is to build better software, faster.
We turned a 50-question PDF on LLMs into a 10-episode lecture series, with Cline orchestrating the entire process.
Here’s a look at the workflow that made it possible, using @GoogleDeepMind 2.5 Pro to process the PDF and @elevenlabsio MCP to generate the lectures. 🧵
It all started with this great resource on LLM basics shared by @omarsar0.
More Claude 4 optimizations, task timeline navigation, and CSV/XLSX support 🧵
We've been fine-tuning how Cline works with Claude 4, focusing on search/replace operations. The latest optimizations use improved delimiter handling that's showing great results in our testing.
The changes center on communication protocols -- we've adjusted how Cline formats instructions for Claude 4, using - and + delimiters instead of < and >, which aligns better with the model's training.
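To make the idea concrete, here's a toy parser for a search/replace block in that style. The exact marker strings are illustrative assumptions, not Cline's actual protocol; the point is that -/+ delimiters avoid colliding with the angle brackets common in code and markup.

```python
# Toy search/replace protocol using - and + delimiters. Marker strings
# are assumptions for illustration, not Cline's real format.
import re

BLOCK = re.compile(
    r"^------- SEARCH\n(.*?)\n=======\n(.*?)\n\+{7} REPLACE$",
    re.DOTALL | re.MULTILINE,
)

def apply_edit(source: str, block: str) -> str:
    # Extract the SEARCH and REPLACE bodies and apply the first match.
    match = BLOCK.search(block)
    if not match:
        raise ValueError("malformed search/replace block")
    search, replace = match.groups()
    return source.replace(search, replace, 1)
```

A model that emits `<` and `>` delimiters can trip over HTML or generics in the code it's editing; dashes and pluses sidestep that whole class of ambiguity.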
LLMs have static knowledge cutoffs. They don't know about library updates, new APIs, or breaking changes that happened after training. Context7 by @upstash bridges this gap by injecting real-time docs into Cline. 🧵
The reality: open-source libraries continue evolving past the knowledge cutoff date of even the latest frontier models. If you're using a library that updated in the last year, the AI is working with outdated information.
This leads to:
- Deprecated method suggestions
- Missing new features
- Incompatible syntax patterns
- Hours debugging "working" code
Cline doesn't index your codebase. No RAG, no embeddings, no vector databases.
This isn't a limitation -- it's a deliberate design choice. As context windows increase, this approach enhances Cline's ability to understand your code.
Here's why.
🧵
The industry default: chunk your codebase, create embeddings, store in vector databases, retrieve "relevant" pieces.
But code doesn't work in chunks. A function call in chunk 47, its definition in chunk 892, the context that explains why? Scattered everywhere.
We believe in the agentic power of the models, and with Claude's 200K+ context window, we don't need clever retrieval. We need intelligent exploration.
So Cline reads code the way you do -- following imports, tracing dependencies, building connected understanding.