Data similarity has such a simple visual interpretation that it will light all the bulbs in your head.
The mathematical magic tells you that similarity is given by the inner product. Have you thought about why?
This is how elementary geometry explains it all.
↓ A thread. ↓
Let's start at the beginning!
In machine learning, data is represented by vectors. So, instead of observations and features, we talk about tuples of (real) numbers.
Vectors have two special functions defined on them: their norms and inner products. Norms simply describe their magnitude, while inner products describe...

well, a 𝐥𝐨𝐭 of things.
Let's start with the fundamentals!
First of all, the norm can be expressed in terms of the inner product.
Moreover, the inner product is linear in both variables. (Check these by hand if you don't believe me.)
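If you'd rather not check by hand, here is a minimal numerical sanity check (a sketch using numpy; the vectors and coefficients are arbitrary):

import numpy as np

x, y, z = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 4.0])
a, b = 2.0, -3.0

# The norm comes from the inner product: ||x||^2 = <x, x>.
print(np.isclose(np.linalg.norm(x) ** 2, np.dot(x, x)))                            # True

# Linearity in the first variable (the second works the same way by symmetry).
print(np.isclose(np.dot(a * x + b * y, z), a * np.dot(x, z) + b * np.dot(y, z)))   # True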
Bilinearity gives rise to a geometric interpretation of the inner product.
If we form an imaginary triangle from 𝑥, 𝑦, and 𝑥+𝑦, we can express the inner product in terms of the sides' lengths.
(Even in higher dimensions, we can form this triangle. It'll just lie in a two-dimensional subspace.)
On the other hand, applying the law of cosines, we obtain yet another way of expressing the length of 𝑥+𝑦, this time in terms of the other two sides and the angle enclosed by them.
Putting these together, we see that the inner product of 𝑥 and 𝑦 is the product of
• the norm of 𝑥,
• the norm of 𝑦,
• and the cosine of their enclosed angle!
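Spelled out, the argument is short. Expanding ‖𝑥 + 𝑦‖² with bilinearity:

‖𝑥 + 𝑦‖² = ⟨𝑥 + 𝑦, 𝑥 + 𝑦⟩ = ‖𝑥‖² + 2⟨𝑥, 𝑦⟩ + ‖𝑦‖².

The law of cosines, applied to the same triangle, gives

‖𝑥 + 𝑦‖² = ‖𝑥‖² + ‖𝑦‖² + 2‖𝑥‖‖𝑦‖ cos 𝜃,

where 𝜃 is the angle enclosed by 𝑥 and 𝑦. (The triangle's interior angle is 𝜋 − 𝜃, and cos(𝜋 − 𝜃) = −cos 𝜃, which flips the usual minus sign.) Comparing the two right-hand sides, we get ⟨𝑥, 𝑦⟩ = ‖𝑥‖‖𝑦‖ cos 𝜃.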
If we scale down 𝑥 and 𝑦 to unit lengths, their inner product simply gives the cosine of the angle.
You might know this as cosine similarity.
For data points, the closer the cosine similarity is to 1, the more their features move together.
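In code, cosine similarity is a one-liner. A minimal sketch with numpy (the vectors are made up for illustration):

import numpy as np

def cosine_similarity(x, y):
    # Scale both vectors to unit length, then take their inner product.
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # points in the same direction as a
print(cosine_similarity(a, b))  # 1.0: maximal similarity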
Inner products play an essential part in data science and machine learning.
Because of this, they are the main topic of the newest chapter of my book, The Mathematics of Machine Learning. Each week, I release a new chapter, just as I write it.
If I toss a fair coin ten times and they all come up heads, what is the chance that the 11th toss will also be heads? Many think that it'll be highly unlikely. However, this is incorrect.
Here is why!
↓ A thread. ↓
In probability theory and statistics, we often study events in the context of other events.
This is captured by conditional probabilities, answering a simple question: "What is the probability of A, given that B has occurred?"
Without any additional information, the probability that eleven coin tosses result in eleven heads in a row is extremely small: 1/2¹¹ = 1/2048, roughly 0.05%.
However, notice that this is not what we asked. The original question was to find the probability of the 11th toss coming up heads, given the result of the previous ten. Since the tosses are independent, the first ten results tell us nothing: the answer is still 1/2.
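If you don't trust the argument, a quick simulation illustrates it (a sketch with numpy; the sample size and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
tosses = rng.integers(0, 2, size=(1_000_000, 11))   # 1 = heads, 0 = tails

# Unconditionally, eleven heads in a row is rare: about 1/2048 of the sequences.
print((tosses.sum(axis=1) == 11).mean())

# Conditioned on the first ten being heads, the 11th is still a fair toss: about 0.5.
first_ten_heads = tosses[:, :10].sum(axis=1) == 10
print(tosses[first_ten_heads, 10].mean())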
The early access of my Mathematics of Machine Learning book is launching today!
One chapter per week, we go from the basics to the internals of neural networks. We are starting with vector spaces, the scene where machine learning happens.
Here is why they are so important!
🧵 👇🏽
As you probably know, data is represented by vectors.
Data points are just tuples of measurements. In their raw form, they are hardly useful for us. They are just blips in space.
Without operations and transformations, it is difficult to predict class labels or do anything else.
Vector spaces provide a mathematical structure where operations naturally arise.
Instead of a blip, just imagine an arrow pointing to the data point from a fixed origin.
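To make this concrete, here's a tiny sketch (numpy, arbitrary numbers) of the operations a vector space hands us for free:

import numpy as np

x = np.array([2.0, 1.0])   # one data point, as an arrow from the origin
y = np.array([0.5, 3.0])   # another data point

print(x + y)        # addition
print(2.5 * x)      # scalar multiplication
print((x + y) / 2)  # and combinations of the two, e.g. the midpoint of x and y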
Even though most of us are introduced to the subject through this example, fitting functions to a training dataset seemingly doesn't give us any deep insight into the data.
This is what's working behind the scenes!
🧵 👇🏽
Consider a simple example: predicting the value 𝑦 from the observation 𝑥; for instance, 𝑦 could be a real estate price and 𝑥 the square footage.
If you are a visual person, this is how you can imagine such a dataset.
The first thing one would do is to fit a linear function 𝑓(𝑥) = 𝑎𝑥 + 𝑏 to the data.
By looking at the result, we can see that something is not right. Sure, it might capture the mean value for a given observation, but the variance and the noise in the data are not explained.
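As an illustration, this is roughly how such a fit looks in code. A minimal sketch on synthetic data (the numbers, the noise level, and the use of np.polyfit are my own choices for the example):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(50, 200, size=100)                  # square footage
y = 3.0 * x + 20.0 + rng.normal(0, 25, size=100)    # price, with noise

# Least-squares fit of f(x) = a*x + b.
a, b = np.polyfit(x, y, deg=1)
print(a, b)   # recovers the trend, but says nothing about the spread around it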
How to build a good understanding of math for machine learning?
I get this question a lot, so I decided to make a complete roadmap for you. In essence, three fields make this up: calculus, linear algebra, and probability theory.
Let's take a quick look at them!
🧵 👇
1. Linear algebra.
In machine learning, data is represented by vectors. Essentially, training a learning algorithm is finding more descriptive representations of data through a series of transformations.
Linear algebra is the study of vector spaces and their transformations.
Simply speaking, a neural network is just a function mapping the data to a high-level representation.
Linear transformations are the fundamental building blocks of these. Developing a good understanding of them will go a long way, as they are everywhere in machine learning.
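As a sketch (my own minimal example, not from the book), a single layer of a neural network is just a linear transformation followed by a nonlinearity:

import numpy as np

def layer(x, W, b):
    return np.maximum(W @ x + b, 0.0)   # ReLU(Wx + b)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # linear map from 4-dimensional data to a 3-dimensional representation
b = rng.normal(size=3)
x = rng.normal(size=4)        # one data point
print(layer(x, W, b))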
You might be surprised, but I gained a lot from playing games. Board games, video games, all of them. Playing is a free-time activity, but it can teach a lot about life and work.
This thread is about the most important lessons I learned.
1. Taking responsibility for your mistakes.
Mistakes are the best way to learn, but only if you take responsibility for them instead of looking for excuses. Stop blaming bad luck, lag, teammates, or anything else.
Be your own critic and identify where you can improve.
2. Actively focus on improvement.
Contrary to popular belief, "just doing it" is not an effective way to learn. Identifying flaws in your game, setting progressive goals, and keeping yourself accountable relentlessly supercharges the process. Play (work) with purpose.