Softmax is one of the most commonly used functions in machine learning.

It is used to transform high-level features into probabilities. However, from the formula alone, it is hard to see how this transformation actually works.

Softmax might not be what you think it is. Let's find out why!

🧵 👇🏽
First, we start with the exponential function eˣ, which transforms a real number into a positive one.

It has a property that reveals the geometry of this transformation: it turns addition into multiplication.

In particular, eᵃ⁺ᵇ = eᵃeᵇ holds.
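As a quick sanity check, here is a tiny NumPy snippet (purely illustrative) that verifies the identity numerically:

import numpy as np

a, b = 2.0, 3.0
print(np.exp(a + b))          # 148.4131591...
print(np.exp(a) * np.exp(b))  # the same value, up to floating-point rounding

Also note that np.exp always returns a positive number, no matter the sign of the input.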
The input x = (x₁, x₂, ..., xₙ) consists of the highest level features: the class scores.

For two vectors x and y, the difference xᵢ - yᵢ measures how far apart their i-th features are.

After applying the exponential function, this difference is turned into a ratio.
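Spelled out: exp(xᵢ - yᵢ) = exp(xᵢ) / exp(yᵢ), so the difference between two features becomes the ratio of their exponentials.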
To turn these transformed features into probabilities, we have to normalize them so that the sum is one.

This is why we divide by the sum of all the exponentials in the definition of Softmax.
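Written out, the definition is

softmax(x)ᵢ = exp(xᵢ) / (exp(x₁) + exp(x₂) + ... + exp(xₙ)).

As a minimal NumPy sketch (naive on purpose; the function name is mine, and real implementations are more careful about overflow, as we'll see at the end of the thread):

import numpy as np

def softmax(x):
    # exponentiate each class score, then normalize so the outputs sum to one
    exp_x = np.exp(x)
    return exp_x / exp_x.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # ≈ [0.09, 0.24, 0.67]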
Although Softmax is very popular, there is a big issue with it.

Since exponentiation turns addition into multiplication, the normalization cancels the resulting common factors, so the function maps very different feature vectors to the same probability distribution.
If the features correspond to the class labels "cat", "dog", and "kangaroo", the vector (1, 2, 3) intuitively means that "kangaroo" is a bit more likely than the others.

In contrast, (-10, -9, -8) signals that the input is probably none of them. Yet their Softmax outputs are identical.

Why?
Because Softmax is invariant under translating every feature by the same value.
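Spelled out: for any constant c,

softmax(x + c)ᵢ = exp(xᵢ + c) / Σⱼ exp(xⱼ + c) = exp(c)exp(xᵢ) / (exp(c) Σⱼ exp(xⱼ)) = softmax(x)ᵢ,

because the common factor exp(c) cancels during normalization. The vectors (1, 2, 3) and (-10, -9, -8) differ by exactly such a shift (c = -11).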

If this is not clear, check out the calculation below.
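Here it is as a short NumPy check (illustrative, using the same naive softmax as above):

import numpy as np

def softmax(x):
    exp_x = np.exp(x)
    return exp_x / exp_x.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))      # ≈ [0.09, 0.24, 0.67]
print(softmax(np.array([-10.0, -9.0, -8.0])))  # ≈ [0.09, 0.24, 0.67], the same distribution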
This is the reason why prediction uncertainty is hard to estimate with Softmax.

Overall, you have to be careful when using Softmax. Instead of treating the output as the true class probability distribution, view it as an estimate of which class is the most likely.
If you enjoyed this thread, consider following me and hitting a like/retweet on the first tweet of the thread!

I regularly post simple explanations of seemingly complicated concepts in machine learning. Make sure you don't miss out on the next one!
An important point by @tpk_in: the exponential function can lead to overflow, so the translational invariance mentioned above can be useful for implementations.
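To make that concrete, here is a sketch of the usual trick (an illustration of the standard approach, not something from this thread, and the name stable_softmax is mine): subtract the maximum score before exponentiating. By the translation invariance above, the output is unchanged, but the largest argument of exp becomes 0, so nothing overflows.

import numpy as np

def stable_softmax(x):
    # shifting by the maximum leaves the result unchanged (translation invariance),
    # but caps the largest argument of exp at 0, preventing overflow
    shifted = x - x.max()
    exp_shifted = np.exp(shifted)
    return exp_shifted / exp_shifted.sum()

x = np.array([1000.0, 1001.0, 1002.0])
# a naive softmax would compute np.exp(1002.0) = inf and return nan values
print(stable_softmax(x))  # ≈ [0.09, 0.24, 0.67]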

