Tivadar Danka Profile picture
Aug 23, 2021 · 9 tweets · 3 min read
Machine learning is more than function fitting.

Even though most of us are introduced to the subject through this example, fitting functions to a training dataset seemingly doesn't give us any deep insight into the data.

This is what's working behind the scenes!

🧵 👇🏽
Consider a simple example: predicting the value 𝑦 from the observation 𝑥; for instance, the 𝑦-s are real estate prices based on the square footage 𝑥.

If you are a visual person, this is how you can imagine such a dataset.
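None of the actual numbers appear in the thread, so here is a minimal synthetic sketch of such a dataset; the slope, intercept, and noise level are invented purely for illustration:

```python
import numpy as np

# Hypothetical real-estate data: square footage x, noisy price y.
# The "true" slope, intercept, and noise scale below are made up.
rng = np.random.default_rng(42)

n = 200
x = rng.uniform(500, 3000, size=n)                        # square footage
true_a, true_b, sigma = 150.0, 20_000.0, 30_000.0
y = true_a * x + true_b + rng.normal(0.0, sigma, size=n)  # noisy prices

print(x.shape, y.shape)  # (200,) (200,)
```

Scatter-plotting `x` against `y` gives the cloud of points the thread's picture shows: a clear upward trend, with plenty of spread around it.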
The first thing one would do is fit a linear function 𝑓(𝑥) = 𝑎𝑥 + 𝑏 to the data.

By looking at the result, we can see that something is not right. Sure, it might capture the mean value for a given observation, but the variance and the noise in the data are not explained.
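On synthetic data (the thread's dataset isn't available, so the line below is a known ground truth plus noise), the least-squares fit is a one-liner with `np.polyfit`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=100)  # noisy line

# Least-squares fit of f(x) = a*x + b; polyfit returns [a, b].
a, b = np.polyfit(x, y, deg=1)
print(a, b)  # close to the true slope 2 and intercept 1
```

The fitted `a` and `b` recover the trend well, but the fitted line says nothing about how far a typical point scatters around it.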
Next, we might try to fit a more expressive function, say a polynomial, but that only seems to make things worse: the model starts overfitting the training dataset.

We need an entirely different model to really explain the dataset.
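A quick way to see the overfitting effect is to compare the training error of a line against a needlessly flexible polynomial on the same synthetic data; degree 9 is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=30)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.3, size=30)
x_test = rng.uniform(0.0, 1.0, size=30)
y_test = 2.0 * x_test + 1.0 + rng.normal(0.0, 0.3, size=30)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

line = np.polyfit(x_train, y_train, deg=1)
poly = np.polyfit(x_train, y_train, deg=9)   # far too flexible

# The polynomial always achieves a lower *training* error, since the
# degree-1 model is a special case of the degree-9 one...
print(mse(line, x_train, y_train), mse(poly, x_train, y_train))
# ...but the extra flexibility chases the noise, not the structure.
print(mse(line, x_test, y_test), mse(poly, x_test, y_test))
```

The training error of the polynomial is guaranteed to be at least as low as the line's; the interesting question is what happens on held-out data, where the wiggly fit typically does worse.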
This is where probabilities come in.

Instead of a deterministic function, we estimate the probability distribution of the observations.

If the random variable 𝑋 describes our data and 𝑌 is the corresponding observation, we can model their relation with a Gaussian distribution.
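Written out, the usual form of this model is the following (the noise scale σ is my notation, not spelled out in the thread):

```latex
% Conditional Gaussian model: given x, the observation Y is
% normally distributed around the line a*x + b.
Y \mid X = x \;\sim\; \mathcal{N}(ax + b,\, \sigma^2),
\qquad
p(y \mid x) = \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\left( -\frac{(y - ax - b)^2}{2\sigma^2} \right).
```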
We can fit this model by maximizing the likelihood function.

Essentially, for a given set of parameters 𝑎 and 𝑏, the likelihood measures how probable it is to observe the training data under the model.

The higher it is, the better the model fits.
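A direct translation of this idea into code, assuming the Gaussian model with a fixed noise scale σ (an assumption on my part, matching the usual setup):

```python
import numpy as np

def gaussian_log_likelihood(a, b, sigma, x, y):
    """Log-likelihood of the data under y_i ~ N(a*x_i + b, sigma^2)."""
    residuals = y - (a * x + b)
    n = len(x)
    return (-n / 2.0 * np.log(2.0 * np.pi * sigma**2)
            - np.sum(residuals**2) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=50)

# Parameters near the truth score higher than a clearly wrong guess:
print(gaussian_log_likelihood(2.0, 1.0, 0.1, x, y) >
      gaussian_log_likelihood(0.0, 0.0, 0.1, x, y))  # True
```

In practice we work with the log-likelihood rather than the likelihood itself: the logarithm turns the product over data points into a sum, which is numerically far better behaved.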
It turns out that maximizing the likelihood is the same as minimizing the mean squared error!

(Don't worry about the computational details yet, I'll have you covered soon.)
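For the impatient, here is a sketch of why: under the Gaussian model with σ held fixed, the equivalence takes one line of algebra.

```latex
% Log-likelihood of n i.i.d. observations under y_i ~ N(a x_i + b, sigma^2):
\log L(a, b)
= -\frac{n}{2}\log(2\pi\sigma^2)
  - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - a x_i - b)^2.
% The first term is constant in a and b, so maximizing log L
% is the same as minimizing the mean squared error:
\arg\max_{a,b} \log L(a, b)
= \arg\min_{a,b} \frac{1}{n} \sum_{i=1}^{n} (y_i - a x_i - b)^2.
```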
The result?

A probabilistic model that explains the entire dataset, not just its mean. Every time you fit a linear regressor, probability and statistics are working in the background.
Of course, there is much more beneath the surface.

If you are interested in the technical details, check out my post below!

(All mathematical prerequisites on probability are covered.)

tivadardanka.com/blog/the-stati…
