There is one big reason we love the logarithm function in machine learning.
Logarithms help us reduce complexity by turning multiplication into addition. You might not know it, but they are behind a lot of things.
Here is the entire story.
🧵 👇🏽
First, let's start with the definition of the logarithm.
The base 𝑎 logarithm of 𝑏 is simply the solution of the equation 𝑎ˣ = 𝑏.
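For example:

```latex
\log_2 8 = 3, \quad \text{since} \quad 2^3 = 8.
```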
Despite its simplicity, it has many useful properties that we take advantage of all the time.
You can think of the logarithm as the inverse of exponentiation.
Because of this, it turns multiplication into addition. Exponentiation does the opposite: it turns addition into multiplication.
(The base is often assumed to be a fixed constant. Thus, it can be omitted.)
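A quick sanity check in Python, using nothing but the standard math module:

```python
import math

a, b = 12.0, 34.0

# The logarithm turns multiplication into addition:
print(math.log(a * b))            # 6.0112...
print(math.log(a) + math.log(b))  # the same, up to floating-point error

# Exponentiation does the opposite, turning addition into multiplication:
x, y = 1.5, 2.5
print(math.exp(x + y))            # 54.5981...
print(math.exp(x) * math.exp(y))  # the same value
```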
Why is this useful? For calculating gradients and derivatives!
Training a neural network requires computing the gradient of its loss. However, many commonly used functions are written in terms of products, and differentiating a long product with the product rule gets messy fast.
By taking the logarithm first, the product turns into a sum, which is much easier to differentiate.
This method is called logarithmic differentiation.
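Here is the trick in its general form, for a product of positive, differentiable functions:

```latex
f(x) = \prod_{i=1}^{n} g_i(x)
\quad\Longrightarrow\quad
\log f(x) = \sum_{i=1}^{n} \log g_i(x)
```

Differentiating both sides and multiplying back by f(x):

```latex
\frac{f'(x)}{f(x)} = \sum_{i=1}^{n} \frac{g_i'(x)}{g_i(x)}
\quad\Longrightarrow\quad
f'(x) = f(x) \sum_{i=1}^{n} \frac{g_i'(x)}{g_i(x)}
```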
Since the logarithm is increasing, maximizing a function is the same as maximizing its logarithm. (Same with minimization.)
One example where this is useful is maximum likelihood estimation.
Given a set of observations and a predictive model, we can write the likelihood in the following form.
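It is a product over the observations, and the logarithm turns it into a sum (a standard sketch, with p denoting the model's probability density):

```latex
L(\theta) = \prod_{i=1}^{n} p(x_i \mid \theta)
\quad\Longrightarrow\quad
\log L(\theta) = \sum_{i=1}^{n} \log p(x_i \mid \theta)
```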
Believe it or not, this is behind the mean squared error.
Every time you use this, logarithms are working in the background.
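To see why, assume a regression model with additive Gaussian noise (the standard assumption behind this connection):

```latex
y_i = f(x_i, \theta) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
```

Plugging the Gaussian density into the log-likelihood gives

```latex
\log L(\theta) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl( y_i - f(x_i, \theta) \bigr)^2 + \text{const},
```

so maximizing the log-likelihood is exactly minimizing the mean squared error.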
If you enjoyed this thread and want to see behind the curtain of machine learning, I am writing a book for you, where we go from high school math to neural networks, one step at a time.
The early access for Mathematics of Machine Learning is out now!
More applications of logarithms: transforming data for visualization. This is extremely useful in life sciences, where features often span several orders of magnitude.
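A minimal matplotlib sketch; the data here is synthetic, drawn from a log-normal distribution to mimic that kind of spread:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data spanning several orders of magnitude.
rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.0, sigma=2.0, size=10_000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Raw scale: a few huge values squash everything into the first bin.
ax1.hist(data, bins=50)
ax1.set_title("raw scale")

# Log scale: the shape of the distribution becomes visible.
ax2.hist(np.log10(data), bins=50)
ax2.set_title("log10 scale")

plt.show()
```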
🤔 Should you learn mathematics for machine learning?
Let's do a thought experiment! Imagine moving to a new country without speaking the language or knowing the way of life. However, you have a smartphone and a reliable internet connection.
How do you start exploring?
1/8
With Google Maps and a credit card, you can do many awesome things there: explore the city, eat in nice restaurants, have a good time.
You can buy groceries every day without speaking a word: just put the items in your basket and swipe your card at the checkout.
2/8
After a few months, you'll start to pick up some language as well—simple things, like saying greetings or introducing yourself. You are off to a good start!
There are built-in solutions for common tasks that just work. Food ordering services, public transportation, etc.
3/8
Matrices are the basic building blocks of learning algorithms.
Multiplying data vectors by a matrix is equivalent to transforming the feature space. We often treat this as a "black box", but there is a lot to discover.
For one, how they change the volume of objects.
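Here is a minimal numpy sketch of that volume change (the matrix A is just an arbitrary example): transform the unit square and measure the area of its image.

```python
import numpy as np

# An arbitrary 2x2 transformation matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The unit square is spanned by the basis vectors e1 and e2.
# Its image under A is the parallelogram spanned by the columns of A.
v1 = A @ np.array([1.0, 0.0])
v2 = A @ np.array([0.0, 1.0])

# Area of that parallelogram, via the 2D cross product.
area = abs(v1[0] * v2[1] - v1[1] * v2[0])

print(area)               # 6.0: the unit square's area got scaled by 6
print(np.linalg.det(A))   # 6.0: exactly the determinant of A
```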
This is described by the determinant of the matrix, which tells you two things:
• how much the transformation scales volume (the magnitude of the determinant),
• and whether it flips the orientation of the basis vectors (its sign).
The determinant is given by the formula below. I am a mathematician, and even I find this intimidating.
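For an n × n matrix, the intimidating formula in question is almost certainly the Leibniz expansion, which sums over all permutations:

```latex
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}
```

Here Sₙ is the set of all permutations of {1, …, n}. Nobody computes determinants this way in practice; numerical libraries factor the matrix first (for instance, an LU decomposition) and multiply the diagonal entries.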
Data similarity has such a simple visual interpretation that it will light all the bulbs in your head.
The mathematical magic tells you that similarity is given by the inner product. Have you thought about why?
This is how elementary geometry explains it all.
↓ A thread. ↓
Let's start at the beginning!
In machine learning, data is represented by vectors. So, instead of observations and features, we talk about tuples of (real) numbers.
Vectors have two special functions defined on them: their norms and inner products. Norms simply describe their magnitude, while inner products describe
.
.
.
well, a 𝐥𝐨𝐭 of things.
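One of those things is similarity. As a preview of where the thread is heading, here is a minimal numpy sketch of cosine similarity: the inner product of the two vectors, normalized by their norms.

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    # cos(theta) for the angle theta between x and y:
    # +1 for parallel vectors, 0 for orthogonal, -1 for opposite.
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # same direction as x
z = np.array([-2.0, 1.0, 0.0])  # orthogonal to x

print(cosine_similarity(x, y))  # 1.0
print(cosine_similarity(x, z))  # 0.0
```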