There is one big reason we love the logarithm function in machine learning.
Logarithms help us reduce complexity by turning multiplication into addition. You might not know it, but they are behind a lot of things in machine learning.
Here is the entire story.
🧵 👇🏽
First, let's start with the definition of the logarithm.
The base 𝑎 logarithm of 𝑏 is simply the solution of the equation 𝑎ˣ = 𝑏.
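In symbols (with a quick concrete check added for illustration):

$$\log_a b = x \iff a^x = b, \qquad \text{e.g. } \log_2 8 = 3 \text{ because } 2^3 = 8.$$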
Despite its simplicity, it has many useful properties that we take advantage of all the time.
You can think of the logarithm as the inverse of exponentiation.
Because of this, it turns multiplication into addition. Exponentiation does the opposite: it turns addition into multiplication.
(The base is often assumed to be a fixed constant, so it can be omitted from the notation.)
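These are the identities at work:

$$\log_a(xy) = \log_a x + \log_a y, \qquad a^{x+y} = a^x \, a^y.$$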
Why is this useful? For calculating gradients and derivatives!
Training a neural network requires computing gradients. However, many commonly used functions, like likelihoods, are products of many terms.
Differentiating a product directly is messy: the product rule turns an n-term product into a sum of n terms, each of which is itself a product.
By taking the logarithm first, the product becomes a sum, which we can differentiate term by term.
This method is called logarithmic differentiation.
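Here is how it works on a general product (a sketch of the standard derivation):

$$f(x) = \prod_{k=1}^{n} f_k(x) \;\Longrightarrow\; \log f(x) = \sum_{k=1}^{n} \log f_k(x) \;\Longrightarrow\; \frac{f'(x)}{f(x)} = \sum_{k=1}^{n} \frac{f_k'(x)}{f_k(x)}.$$

Differentiating the sum on the right goes one term at a time; the messy product rule never appears.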
Since the logarithm is strictly increasing, maximizing a function is the same as maximizing its logarithm. (Same for minimization.)
One example where this is useful is the maximum likelihood estimation.
Given a set of observations and a model, maximum likelihood picks the parameters under which the observed data is most probable. The logarithm turns the likelihood, a product, into a sum.
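In symbols (a sketch of the standard form; the notation is mine, not from the thread):

$$\hat{\theta} = \arg\max_{\theta} \prod_{i=1}^{n} p(x_i \mid \theta) = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i \mid \theta).$$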
Believe it or not, this is behind the mean squared error.
Every time you use this, logarithms are working in the background.
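Here is the connection, sketched (a standard derivation; the notation $f_\theta$ is mine): assume the targets are predictions plus Gaussian noise, $y_i = f_\theta(x_i) + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Then the log-likelihood is

$$\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \big(y_i - f_\theta(x_i)\big)^2 + \text{const},$$

so maximizing the log-likelihood is exactly minimizing the mean squared error.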
If you enjoyed this thread and want to peek behind the curtain of machine learning, I am writing a book for you, taking you from high school math to neural networks, one step at a time.
The early access for Mathematics of Machine Learning is out now!
More applications of logarithms: transforming data for visualization. This is extremely useful in the life sciences, where feature values often span several orders of magnitude.
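A minimal sketch of the idea (the data here is made up, lognormal purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data spanning several orders of magnitude,
# as is typical for e.g. gene expression measurements.
rng = np.random.default_rng(42)
values = rng.lognormal(mean=3.0, sigma=2.0, size=1_000)

fig, (ax_raw, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

# Raw scale: a handful of huge values squash everything else.
ax_raw.hist(values, bins=50)
ax_raw.set_title("raw scale")

# Log scale: the structure of the distribution becomes visible.
ax_log.hist(np.log10(values), bins=50)
ax_log.set_title("log10 scale")

plt.show()
```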
"How large that number in the Law of Large Numbers is?"
Sometimes, a thousand samples are enough. Sometimes, even ten million samples fall short.
How do we know? I'll explain.
First things first: the law of large numbers (LLN).
Roughly speaking, it states that the averages of independent, identically distributed samples converge to the expected value as the number of samples grows to infinity.
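In formulas: if $X_1, X_2, \dots$ are independent, identically distributed with expected value $\mu$, then

$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \longrightarrow \mu \quad \text{as } n \to \infty.$$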
We are going to dig deeper.
There are two kinds of LLNs: weak and strong.
The weak law makes a probabilistic statement about the sample averages: the probability that the sample average falls farther than ε from the expected value goes to zero, for any ε > 0.
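Stated precisely (both are standard textbook forms, added here for reference):

$$\text{weak law:} \quad \lim_{n \to \infty} P\big(\,|\bar{X}_n - \mu| > \varepsilon\,\big) = 0 \quad \text{for every } \varepsilon > 0,$$

$$\text{strong law:} \quad P\Big(\lim_{n \to \infty} \bar{X}_n = \mu\Big) = 1.$$

The strong law is the stronger statement: the sample averages converge with probability one, not just in probability.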
The single biggest argument about statistics: is probability frequentist or Bayesian? It's neither, and I'll explain why.
Buckle up. Deep-dive explanation incoming.
First, let's look at what probability is.
Probability quantitatively measures the likelihood of events, like rolling a six with a die. It's a number between zero and one. This is independent of interpretation; it's a rule set in stone.
In the language of probability theory, events are formalized as subsets of a sample space.
(The sample space is itself a set, usually denoted by Ω; an event is a subset of Ω.)
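A concrete example with a fair die (a standard textbook setup, added for illustration):

$$\Omega = \{1, 2, 3, 4, 5, 6\}, \qquad A = \{2, 4, 6\} \;(\text{the roll is even}), \qquad P(A) = \frac{3}{6} = \frac{1}{2}.$$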