Image classification is one of the most common and important computer vision tasks.

In image classification, the goal is to identify the category of a given image.

Let's talk more about this important task 🧵🧵
Image classification is about recognizing which of several predefined categories an image belongs to.

Take an example: given an image of a car, can you write a computer program that recognizes it as a car?
One might ask why we even need to make computers recognize images, and that's a fair question.

Humans have an innate perception system. Identifying or recognizing objects seems trivial to us.

But for computers, it's a different story. Why is that?
Computers only understand numbers. When you look at a car, you know it's a car. When you feed a car image to a computer, it only sees numbers or pixel values.

To a computer, images are just numbers!

(Pixels values in the image below are just an example)
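To make "images are just numbers" concrete, here is a minimal sketch using NumPy, with a tiny 4x4 grayscale grid whose pixel values are made up purely for illustration:

```python
import numpy as np

# A tiny 4x4 grayscale "image": each entry is a pixel intensity in [0, 255].
# To us it's a picture; to a computer, every image is a grid of numbers like this.
image = np.array([
    [  0,  50, 100, 150],
    [ 25,  75, 125, 175],
    [ 50, 100, 150, 200],
    [ 75, 125, 175, 255],
], dtype=np.uint8)

print(image.shape)   # (4, 4) -> height x width
print(image[0, 3])   # 150 -> the pixel in the top-right corner

# A color image simply adds a third dimension: one grid per RGB channel.
color_image = np.zeros((4, 4, 3), dtype=np.uint8)
print(color_image.shape)  # (4, 4, 3)
```

A real photo works exactly the same way, just with many more pixels (e.g., 224x224x3).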
The fact that computers only see numbers makes it hard for them to recognize similar images captured under different conditions, such as differences in color, lighting, or scene.

As you might guess, that's why we need variety in the training images.
Image classification has enabled many real-world applications such as medical diagnosis, where a model can take a medical scan and identify whether a particular disease is present.
Image classification is also useful in many other tasks such as crop classification, food classification (see nutrify.app by @mrdbourke), visual similarity search, product tagging, face recognition, etc.

As you can see, image classification is an important task.
Image classification is also at the heart of other computer vision tasks such as object detection.

In object detection, we recognize the objects present in the image, localize them, and draw bounding boxes around them.
There are 3 main types of image classification problems:

◆Binary image classification
◆Multi-label classification
◆Multi-class classification

Let's discuss them...
1. BINARY IMAGE CLASSIFICATION

In binary image classification, we are dealing with 2 categories.

A well-known example of this type is cats vs. dogs classification: given an image, recognize whether it contains a cat or a dog.
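In practice, a binary classifier typically outputs a single raw score (logit) that a sigmoid squashes into a probability, with 0.5 as the usual decision threshold. A minimal sketch, where the scores are made-up model outputs:

```python
import math

def sigmoid(z: float) -> float:
    """Map a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def classify(score: float, labels=("cat", "dog")) -> str:
    # score is the raw model output; sigmoid(score) is read as P(label == "dog")
    p_dog = sigmoid(score)
    return labels[1] if p_dog >= 0.5 else labels[0]

print(classify(2.3))   # "dog"  (sigmoid(2.3) is about 0.91)
print(classify(-1.0))  # "cat"  (sigmoid(-1.0) is about 0.27)
```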
2. MULTI-LABEL CLASSIFICATION

Multi-label classification is a special type. Here, a single image can belong to more than one category at the same time.

Ex: Given an image of a vehicle, recognize its vehicle type, and also identify its specific model (among several possible models).
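Because an image can carry several labels at once, a multi-label model usually outputs one independent sigmoid score per label and keeps every label whose probability clears a threshold. A sketch with a hypothetical label set and made-up scores:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

LABELS = ["car", "truck", "sedan", "suv"]  # hypothetical label set

def predict_labels(logits, threshold=0.5):
    # Each label gets its own independent probability; more than one can be "on".
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

print(predict_labels([3.0, -2.0, 1.5, -0.5]))  # ['car', 'sedan']
```

Note the contrast with multi-class classification below, where the probabilities compete and exactly one class wins.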
3. MULTI-CLASS CLASSIFICATION

Multi-class classification is very common. It is concerned with determining the category of an image from multiple categories (more than 2).

Ex: classifying 10 types of fashion items, rock-paper-scissors classification, etc.
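A multi-class model outputs one score per class and converts them into a probability distribution with softmax; the prediction is the class with the highest probability. A sketch using the rock-paper-scissors example, with made-up scores:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; the outputs sum to 1.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["rock", "paper", "scissors"]  # example 3-class problem
logits = [0.5, 2.0, -1.0]                # hypothetical model outputs

probs = softmax(logits)
prediction = CLASSES[probs.index(max(probs))]
print(prediction)  # "paper"
```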
This is the end of the thread about image classification.

Image classification is really an important task. It has enabled a wide range of real-world applications in industries such as medicine, agriculture, and manufacturing.
We saw that there are 3 main types of classification tasks: binary image classification, multi-label classification, and multi-class classification.
Thanks for reading.

I regularly write about machine learning and deep learning ideas. My goal is to simplify complex concepts.

If you haven't done it yet, follow @Jeande_d for more machine learning things!

