Akshay ๐Ÿš€ Profile picture
Jul 1, 2023 โ€ข 11 tweets โ€ข 4 min read โ€ข Read on X
Object-oriented programming is essential for writing clean & modular code!

Let's clearly understand OOP with Python! 🚀

A Thread ๐Ÿงต๐Ÿ‘‡
We break it down into 6 important concepts:

- Object ๐Ÿš˜
- Class ๐Ÿ—๏ธ
- Inheritance ๐Ÿงฌ
- Encapsulation ๐Ÿ”
- Abstraction ๐ŸŽญ
- Polymorphism ๐ŸŒ€

Let's take them one-by-one... ๐Ÿš€
1๏ธโƒฃ Object ๐Ÿš˜

Just look around: everything you see can be treated as an object.

For instance, a Car, a Dog & your Laptop are all objects.

An Object can be defined using 2 things:

- Properties: that describe an object
- Behaviour: the functions that an object can perform

...๐Ÿ‘‡
For example, a Car is an object that has properties such as color & model, and behaviours such as accelerating, braking & turning.

But, how do we create these objectsโ“๐Ÿค”

This is where we need to understand Classes!

...๐Ÿ‘‡
2๏ธโƒฃ Class ๐Ÿ—๏ธ

A class is like a blueprint for creating objects.

It defines a set of properties & functions (methods) that will be common to all objects created from the class.

So, we start with a simple example & follow along!

Let's define a class Car & create its object👇
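The code image from the thread isn't reproduced here, so here's a minimal sketch of what a Car class & its object might look like (the exact attribute & method names are my assumptions):

```python
class Car:
    def __init__(self, color, model):
        # Properties: describe the object
        self.color = color
        self.model = model

    def accelerate(self):
        # Behaviour: a function the object can perform
        return f"The {self.color} {self.model} is accelerating!"

# Create an object (instance) from the Car blueprint
my_car = Car("red", "Tesla Model 3")
print(my_car.accelerate())  # The red Tesla Model 3 is accelerating!
```

Every object created from this class gets its own `color` & `model`, but they all share the same behaviours.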
3๏ธโƒฃ Inheritance ๐Ÿงฌ

Let's say we want to create an Electric car without redefining all the properties and methods of the basic Car class.

Inheritance lets a child class inherit all the properties/methods of its parent class & add new ones or override existing ones.

Check this out๐Ÿ‘‡
4๏ธโƒฃ Encapsulation ๐Ÿ”

Encapsulation helps to bundle data and methods inside a class, restricting direct access to certain attributes and methods.

We use private attributes/methods to achieve this: a single `_` prefix marks them as internal by convention, while a double `__` prefix triggers name mangling to block direct access.

Here's an example ๐Ÿ‘‡
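The thread's example image isn't shown here; a minimal sketch of encapsulation using a hypothetical BankAccount class (not from the thread):

```python
class BankAccount:
    def __init__(self, balance):
        self.__balance = balance  # name-mangled "private" attribute

    def deposit(self, amount):
        # Data is only modified through methods, which can validate input
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.__balance += amount

    def get_balance(self):
        return self.__balance

account = BankAccount(100)
account.deposit(50)
print(account.get_balance())  # 150
# account.__balance  ->  AttributeError: direct access is restricted
```

The balance can only change through `deposit`, so invalid states (like a negative deposit) are rejected in one place.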
5๏ธโƒฃ Abstraction ๐ŸŽญ

This concept focuses on exposing only essential information to the outside world while hiding implementation details.

We use abstract classes and methods to define a common interface.

Here's an example ๐Ÿ‘‡
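The abstraction example from the image isn't reproduced here; a minimal sketch using Python's `abc` module (the Vehicle/Bike names are my assumptions):

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):
    @abstractmethod
    def start(self):
        # Common interface: every Vehicle must implement start()
        ...

class Car(Vehicle):
    def start(self):
        return "Engine started"

class Bike(Vehicle):
    def start(self):
        return "Pedalling off"

# Vehicle() would raise TypeError: can't instantiate an abstract class
print(Car().start())   # Engine started
print(Bike().start())  # Pedalling off
```

Callers only need to know that every `Vehicle` has `start()`; how each subclass implements it stays hidden.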
If Abstraction and Encapsulation still confuse you at this point 👇

Abstraction conceals the implementation details, but doesn't hide the data itself.

On the other hand, Encapsulation hides the data and restricts unwanted use from external sources.

Cheers! ๐Ÿฅ‚
6๏ธโƒฃ Polymorphism ๐ŸŒ€

This allows us to use a single interface for different data types or classes.

We can achieve this through method overriding, where a subclass provides a different implementation for a method defined in its parent class.

Let's understand with an example ๐Ÿ‘‡
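The polymorphism example from the image isn't included here; a minimal sketch of method overriding (the Animal/Dog/Cat names are my assumptions):

```python
class Animal:
    def speak(self):
        return "Some sound"

class Dog(Animal):
    def speak(self):  # override the parent's method
        return "Woof!"

class Cat(Animal):
    def speak(self):  # same interface, different implementation
        return "Meow!"

# One interface (speak), many behaviours:
for animal in [Dog(), Cat()]:
    print(animal.speak())
```

The loop calls the same `speak()` method on every object, and Python dispatches to the right implementation at runtime.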
That's a wrap!

If you're interested in:

- Python ๐Ÿ
- Data Science ๐Ÿ“ˆ
- Machine Learning ๐Ÿค–
- Maths for ML ๐Ÿงฎ
- MLOps ๐Ÿ› 
- CV/NLP ๐Ÿ—ฃ
- LLMs ๐Ÿง 

I'm sharing daily content over here, follow me โ†’ @akshay_pachaar if you haven't already!

Newsletter: mlspring.beehiiv.com

Cheers! 🥂
