Santiago · 15 Sep · 12 tweets · 3 min read
Imagine I tell you this:

"The probability of a particular event happening is zero."

Contrary to what you may think, this doesn't mean that this event is impossible. In other words, events with 0 probability could still happen!

This seems contradictory. What's going on here?
Yesterday, I asked the question in the attached image.

Hundreds of people replied. Many of the answers followed the same logic:

"The probability can't be zero because that would mean that the event can't happen."

This, however, is not true.
Let's start with something that we know:

Impossible outcomes always have a probability of 0.

This means that the probability of an event that can't happen is always zero.

Makes sense. But the converse is not necessarily true!
If we limit the space of possible outcomes to a finite number of possibilities, we can say that any outcomes with a probability of zero are impossible.

But what happens if we look into an infinite space of outcomes?
Here is an example: Draw a random real number between 0 and 1.

Possible outcomes? Infinite.

What's the probability of drawing any specific number?
Well, assuming every number is equally likely, we have ∞ possibilities, therefore:

P(drawing a specific number) = 1/∞

Here is where it gets interesting.

Infinity is not a number! We can't simply do regular arithmetic using ∞.
We can reason about 1/∞ by using limits:

lim(𝑥 → ∞) 1/𝑥 = 0

In English: "1/𝑥 tends to 0 as 𝑥 grows towards infinity." In other words, as 𝑥 grows large, our probability gets arbitrarily small.
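
Here is a quick numerical sketch of that limit (plain Python, purely illustrative; the specific values are my own, not from the thread):

```python
# 1/x shrinks toward 0 as x grows
for x in [6, 10**3, 10**6, 10**9]:
    print(x, 1 / x)
# 6 0.16666666666666666
# 1000 0.001
# 1000000 1e-06
# 1000000000 1e-09
```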

This is good, but there's something missing: what's the probability then?
To properly compute the probability in a continuous context, we need to look into probability densities.

I won't go into the details here, but trust me when I simplify things by saying:

P(drawing a specific number) = 1/∞ = 0
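
To give a feel for what the density view buys us, here is a minimal sketch, assuming SciPy (my choice of library, not the thread's): for a uniform draw from [0, 1], probability comes from the width of an interval, and a single number is an interval of width zero.

```python
from scipy.stats import uniform  # X ~ Uniform(0, 1)

# Probability mass lives in intervals:
p_interval = uniform.cdf(0.6) - uniform.cdf(0.4)  # ≈ 0.2
# A single point is a zero-width interval:
p_point = uniform.cdf(0.5) - uniform.cdf(0.5)     # 0.0

print(p_interval, p_point)
```
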
Therefore, the probability of getting any specific outcome from a continuous space of outcomes, like the real numbers between 0 and 1, is zero.

It doesn't mean that the probability is really small. It doesn't mean that the probability is close to zero.

It means that the probability is zero.
Whenever we are looking at a continuous space of outcomes, we say the following:

"An event with zero probability implies that the event will almost never happen."

This is accurate. Saying that the event will never happen (or is impossible) is not.
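
A small simulation makes this concrete. This is an illustrative sketch of my own (note that random.random() returns finite-precision floats, so it only approximates the continuous model):

```python
import random

random.seed(0)  # reproducible illustration
trials = 1_000_000
exact_hits = 0     # draws equal to exactly 0.5
interval_hits = 0  # draws landing inside [0.4, 0.6]

for _ in range(trials):
    x = random.random()  # uniform on [0, 1)
    if x == 0.5:
        exact_hits += 1
    if 0.4 <= x <= 0.6:
        interval_hits += 1

print("exact:", exact_hits / trials)        # 0.0 in practice
print("interval:", interval_hits / trials)  # close to 0.2
```

Notice the twist: every single draw produced some specific number, and under the continuous model each of those numbers had probability zero of being drawn.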
I'm not a mathematician.

This thread aims to illustrate probabilities on an infinite space of outcomes using an informal explanation.

For those looking for a more formal approach, I'd recommend reading about probability density and how it's used in a continuous context.
Every week, I post 2 or 3 threads like this, breaking down machine learning concepts and giving you ideas on applying them in real-life situations.

You can find more of these at @svpino.

If you find this helpful, stay tuned: a lot more is coming.
