Akshay 🚀
Jul 1, 2023 • 11 tweets • 4 min read
Object-oriented programming is essential for writing clean & modular code!

Let's clearly understand OOP with Python! 🚀

A Thread 🧵👇
We break it down into 6 important concepts:

- Object 🚘
- Class 🏗️
- Inheritance 🧬
- Encapsulation 🔐
- Abstraction 🎭
- Polymorphism 🌀

Let's take them one-by-one... 🚀
1๏ธโƒฃ Object ๐Ÿš˜

Just look around, everything you see can be treated as an object.

For instance a Car, Dog, your Laptop are all objects.

An Object can be defined using 2 things:

- Properties: that describe an object
- Behaviour: the functions that an object can perform

...๐Ÿ‘‡
For example, a Car is an object with properties such as color & model, and behaviours such as accelerating, braking & turning.

But how do we create these objects❓🤔

This is where we need to understand Classes!

...👇
2๏ธโƒฃ Class ๐Ÿ—๏ธ

A class is like a blueprint for creating objects.

It defines a set of properties & functions (methods) that will be common to all objects created from the class.

So, we start with a simple example & follow along!

Let's define a class Car & create it's Object๐Ÿ‘‡
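The original example was an image, so here's a minimal sketch of what the Car class could look like (the method names and sample values are illustrative):

```python
class Car:
    def __init__(self, color, model):
        # Properties that describe the object
        self.color = color
        self.model = model

    # Behaviours the object can perform
    def accelerate(self):
        return f"The {self.color} {self.model} is accelerating!"

    def brake(self):
        return f"The {self.color} {self.model} is braking!"


# Create an object (instance) from the Car blueprint
my_car = Car("red", "Tesla Model 3")
print(my_car.color)         # red
print(my_car.accelerate())  # The red Tesla Model 3 is accelerating!
```

Every object created from `Car` gets its own `color` and `model`, but all of them share the same behaviours.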
3๏ธโƒฃ Inheritance ๐Ÿงฌ

Let's say we want to create an Electric car & don't want to define all the properties and methods of the basic Car class.

Inheritance helps us to inherit all the properties/methods of parent class & add new ones or override existing.

Check this out๐Ÿ‘‡
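A minimal sketch of the idea (the `battery_kwh` property and `describe` method are illustrative, since the thread's code was an image):

```python
class Car:
    def __init__(self, color, model):
        self.color = color
        self.model = model

    def describe(self):
        return f"{self.color} {self.model}"


# ElectricCar inherits everything from Car...
class ElectricCar(Car):
    def __init__(self, color, model, battery_kwh):
        super().__init__(color, model)  # reuse the parent's initializer
        self.battery_kwh = battery_kwh  # ...adds a new property...

    # ...and overrides an existing method
    def describe(self):
        return f"{super().describe()} ({self.battery_kwh} kWh battery)"


tesla = ElectricCar("white", "Model S", 100)
print(tesla.describe())  # white Model S (100 kWh battery)
```

Note how `super()` lets the child reuse the parent's code instead of duplicating it.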
4๏ธโƒฃ Encapsulation ๐Ÿ”

Encapsulation helps to bundle data and methods inside a class, restricting direct access to certain attributes and methods.

We use private attributes/methods (with a `_` or `__` prefix) to achieve this.

Here's an example ๐Ÿ‘‡
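A minimal sketch, assuming a bank-account example (a common illustration; the thread's original code was an image):

```python
class BankAccount:
    def __init__(self, balance):
        # Double underscore triggers name mangling: stored as _BankAccount__balance,
        # so outside code can't reach it as account.__balance
        self.__balance = balance

    def deposit(self, amount):
        # The class controls how its data is modified
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.__balance += amount

    def get_balance(self):
        # Controlled, read-only access to the private data
        return self.__balance


account = BankAccount(100)
account.deposit(50)
print(account.get_balance())  # 150
# account.__balance  -> AttributeError: direct access is blocked
```

The balance can only change through `deposit`, which validates the input, rather than by arbitrary external code.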
5๏ธโƒฃ Abstraction ๐ŸŽญ

This concept focuses on exposing only essential information to the outside world while hiding implementation details.

We use abstract classes and methods to define a common interface.

Here's an example ๐Ÿ‘‡
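A minimal sketch using Python's `abc` module (the `Vehicle`/`start` names are illustrative, since the thread's code was an image):

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):
    @abstractmethod
    def start(self):
        """Every vehicle must define how it starts."""

class Car(Vehicle):
    def start(self):
        return "Turning the ignition key"

class ElectricCar(Vehicle):
    def start(self):
        return "Pressing the power button"

# Vehicle() would raise TypeError: abstract classes can't be instantiated.
# Callers only see the start() interface, not how each vehicle implements it.
print(Car().start())          # Turning the ignition key
print(ElectricCar().start())  # Pressing the power button
```

The abstract class guarantees the interface: any concrete subclass must implement `start`, but callers never need to know how.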
If Abstraction and Encapsulation still confuse you at this point 👇

Abstraction conceals the implementation details, but doesn't hide the data itself.

Encapsulation, on the other hand, hides the data and restricts unwanted access from external sources.

Cheers! 🥂
6๏ธโƒฃ Polymorphism ๐ŸŒ€

This allows us to use a single interface for different data types or classes.

We can achieve this through method overriding, where a subclass provides a different implementation for a method defined in its parent class.

Let's understand with an example ๐Ÿ‘‡
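A minimal sketch of polymorphism via method overriding (the `Animal`/`speak` names are illustrative, since the thread's code was an image):

```python
class Animal:
    def speak(self):
        return "Some generic sound"

class Dog(Animal):
    def speak(self):  # overrides the parent implementation
        return "Woof!"

class Cat(Animal):
    def speak(self):  # a different override of the same method
        return "Meow!"

# One interface (speak), many behaviours: the same call works
# on every Animal, and each subclass responds in its own way.
for animal in [Animal(), Dog(), Cat()]:
    print(animal.speak())
```

The loop never checks which class it's holding; it just calls `speak()` and each object does the right thing.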
That's a wrap!

If you're interested in:

- Python 🐍
- Data Science 📈
- Machine Learning 🤖
- Maths for ML 🧮
- MLOps 🛠
- CV/NLP 🗣
- LLMs 🧠

I'm sharing daily content over here, follow me → @akshay_pachaar if you haven't already!

Newsletter: mlspring.beehiiv.com

Cheers! 🥂

