In machine learning, the inner product (or dot product) of vectors is often used to measure similarity.

However, the formula is far from revealing. What does the sum of coordinate products have to do with similarity?
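For reference, here is the formula in question, written out explicitly (n denotes the dimension of the vectors):

\[
\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i
\]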

There is a very simple geometric explanation!

🧵 👇🏽
There are two key things to observe.

First, the inner product is linear in both variables. This property is called bilinearity.
Second, the inner product is zero if the vectors are orthogonal.
With these two properties, given a vector 𝑦, we can decompose 𝑥 into two components: one orthogonal to 𝑦, the other parallel to 𝑦.
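In symbols (the notation x∥ and x⊥ for the two components is mine):

\[
x = x_{\parallel} + x_{\perp}, \qquad x_{\parallel} \text{ parallel to } y, \qquad \langle x_{\perp}, y \rangle = 0.
\]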

So, because of the bilinearity, the inner product of 𝑥 and 𝑦 equals the inner product of 𝑦 and the parallel component of 𝑥: the orthogonal component contributes nothing.
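Spelling out that step with bilinearity:

\[
\langle x, y \rangle = \langle x_{\parallel} + x_{\perp},\, y \rangle = \langle x_{\parallel}, y \rangle + \langle x_{\perp}, y \rangle = \langle x_{\parallel}, y \rangle.
\]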
If we write the parallel component of 𝑥 as a scalar multiple of 𝑦, say r𝑦, we can see that the inner product of 𝑥 and 𝑦 can be expressed in terms of the magnitude of 𝑦 and this scalar r.
In addition, if we assume that 𝑥 and 𝑦 have unit magnitude, the inner product is even simpler: it is exactly the scaling factor r between 𝑦 and the orthogonal projection of 𝑥 onto 𝑦.
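Concretely, writing x∥ = r y:

\[
\langle x, y \rangle = \langle r\, y,\, y \rangle = r \langle y, y \rangle = r\, \lVert y \rVert^2,
\]

which is simply r when 𝑦 has unit magnitude.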

Note that this factor is in [-1, 1]. (It is negative when 𝑥 points away from 𝑦, that is, when the angle between them is larger than 90°.)
There is more. Now comes the really interesting part!

Let's denote the angle between 𝑥 and 𝑦 by α. The scaling factor r equals the cosine of α!

(Recall that we assume that 𝑥 and 𝑦 have unit magnitude.)
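This is plain trigonometry: in the right triangle formed by 𝑥, its projection r𝑦, and the orthogonal component, the hypotenuse 𝑥 has length 1, so the (signed) length of the projection is cos α. Hence

\[
\langle x, y \rangle = r = \cos \alpha \qquad \text{for unit vectors } x \text{ and } y.
\]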
If the vectors don't have unit magnitude, we can simply rescale them to unit length by dividing each by its own magnitude.

The inner product of the scaled vectors is called cosine similarity.
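Written out, for arbitrary nonzero vectors:

\[
\text{cosine similarity}(x, y) = \left\langle \frac{x}{\lVert x \rVert},\, \frac{y}{\lVert y \rVert} \right\rangle = \frac{\langle x, y \rangle}{\lVert x \rVert\, \lVert y \rVert} = \cos \alpha.
\]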

This is probably how you know this quantity. Now you see why!
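If you prefer code, here is a minimal NumPy sketch of the same computation (the function name and the example vectors are mine, not from the thread):

import numpy as np

def cosine_similarity(x, y):
    # Rescale both vectors to unit length, then take their inner product.
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 0.0])
y = np.array([1.0, 0.0, 0.0])
print(cosine_similarity(x, y))  # ≈ 0.4472, the cosine of the angle between x and y

In practice you would typically reach for a library implementation; the point here is only that the arithmetic is exactly the normalization-then-inner-product described above.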
If you liked this thread, consider following me and hitting a like/retweet on the first tweet of the thread!

I regularly post simple explanations of seemingly complicated concepts in machine learning, so make sure you don't miss out on the next one!

More from @TivadarDanka

16 Apr
In the last 24 hours, more than 400 of you decided to follow me. Thank you, I am honored!

As you probably know, I love explaining complex machine learning concepts simply. I have collected some of my past threads for you to make sure you don't miss out on them.

Enjoy!
Read 16 tweets
13 Apr
Convolution is not the easiest operation to understand: it involves functions, sums, and two moving parts.

However, there is an illuminating explanation — with probability theory!

There is a whole new aspect of convolution that you (probably) haven't seen before.

🧵 👇🏽
In machine learning, convolutions are most often applied to images, but to make our job easier, we shall take a step back and go down to one dimension.

There, convolution is defined as below.
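The thread's image is not reproduced here; the standard one-dimensional definitions (continuous for functions, discrete for sequences) are

\[
(f * g)(x) = \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt, \qquad (a * b)_n = \sum_{k} a_k\, b_{n - k}.
\]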
Now, let's forget about these formulas for a while and talk about a simple probability distribution: we toss two 6-sided dice and study the resulting values.

To formalize the problem, let 𝑋 and 𝑌 be two random variables, describing the outcome of the first and second toss.
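Where this is heading (the preview cuts off before the punchline, but this is the standard connection): since the two tosses are independent, the distribution of the sum 𝑋 + 𝑌 is exactly a discrete convolution of the two individual distributions,

\[
P(X + Y = n) = \sum_{k} P(X = k)\, P(Y = n - k).
\]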
Read 9 tweets
8 Apr
One of my favorite convolutional network architectures is the U-Net.

It solves a hard problem in such an elegant way that it became one of the most performant and popular choices for semantic segmentation tasks.

How does it work?

🧵 👇🏽
Let's quickly recap what semantic segmentation is: a common computer vision task, where we want to determine which class each pixel belongs to.

Because we want to provide a prediction on a pixel level, this task is much harder than classification.
Since the absolutely classic paper Fully Convolutional Networks for Semantic Segmentation by Jonathan Long, Evan Shelhamer, and Trevor Darrell, fully end-to-end autoencoder architectures were most commonly used for this.

(Image source: paper above, arxiv.org/abs/1411.4038v2)
Read 10 tweets
7 Apr
There is a common misconception that all probability distributions are like a Gaussian.

Often, the reasoning involves the Central Limit Theorem.

This is not exactly right: they resemble a Gaussian only from a certain perspective.

🧵 👇🏽
Let's state the CLT first. If we have 𝑋₁, 𝑋₂, ..., 𝑋ₙ independent and identically distributed random variables, their centered and scaled sum converges to a Gaussian distribution in the limit.

The surprising thing here is that the limit is independent of the variables' common distribution.
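In formula form, with μ and σ² denoting the common mean and variance (the thread's image is not reproduced here):

\[
\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma \sqrt{n}} \;\longrightarrow\; \mathcal{N}(0, 1) \quad \text{in distribution as } n \to \infty.
\]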
Note that the random variables undergo a significant transformation: summing, then centering and scaling with the mean, the variance, and √𝑛.

(The scaling transformation is the "certain perspective" I mentioned in the first tweet.)
Read 12 tweets
1 Apr
Gradient descent sounds good on paper, but there is a big issue in practice.

For complex functions like training losses for neural networks, calculating the gradient is computationally very expensive.

So what makes gradient descent usable in practice? For one, stochastic gradient descent!

🧵 👇🏽
When you have a lot of data, calculating the gradient of the loss involves the computation of a large sum.

Think about it: if 𝑥ᵢ denotes the data and 𝑤 denotes the weights, the loss function takes the form below.
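The image is not reproduced here; the form in question is the usual (averaged) sum of per-sample losses, and the gradient inherits the same structure (ℓ denotes the loss on a single data point):

\[
L(w) = \frac{1}{N} \sum_{i=1}^{N} \ell(x_i, w), \qquad \nabla L(w) = \frac{1}{N} \sum_{i=1}^{N} \nabla \ell(x_i, w).
\]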
Not only do we have to add a bunch of numbers together, but we also have to find the gradient of each loss term.

For example, if the model contains 10 000 parameters and 1 000 000 data points, we need to compute 10 000 x 1 000 000 = 10¹⁰ derivatives.

This can take a LOT of time.
Read 7 tweets
31 Mar
Gradient descent has a really simple and intuitive explanation.

The algorithm is easy to understand once you realize that it is basically hill climbing with a really simple strategy.

Let's see how it works!

🧵 👇🏽
For functions of one variable, the gradient is simply the derivative of the function.

The derivative expresses the slope of the function's tangent line, but it can also be viewed as a one-dimensional vector!
When the function is increasing, the derivative is positive. When it is decreasing, the derivative is negative.

Translating this to the language of vectors, it means that the "gradient" points in the direction of increase!

This is the key to understanding gradient descent.
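The preview stops here, but the rule it builds toward is the standard one: to descend, step against the direction the gradient points in. A minimal sketch, with η denoting the learning rate (notation mine):

\[
w_{t+1} = w_t - \eta\, \nabla f(w_t).
\]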
Read 9 tweets
