Tivadar Danka · Jul 19
A question we never ask:

"How large is that number in the Law of Large Numbers?"

Sometimes, a thousand samples are large enough. Sometimes, even ten million samples fall short.

How do we know? I'll explain.
First things first: the law of large numbers (LLN).

Roughly speaking, it states that the averages of independent, identically distributed samples converge to the expected value as the number of samples grows to infinity.

We are going to dig deeper.
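In symbols, for independent, identically distributed samples X₁, X₂, … with expected value μ:

\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \longrightarrow \mu \quad \text{as } n \to \infty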
There are two kinds of LLNs: weak and strong.

The weak law makes a probabilistic statement about the sample averages: the probability that the sample average falls farther from the expected value than ε goes to zero, for any ε > 0.

Let's unpack this.
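Written out, this is the standard form of the weak law:

\lim_{n \to \infty} P\left( |\bar{X}_n - \mu| > \varepsilon \right) = 0 \quad \text{for every } \varepsilon > 0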
The quantity P(|X̄ₙ - μ| > ε) might be hard to grasp at first, but it simply measures the distance of the sample mean from the true mean (that is, the expected value) in a probabilistic sense.
The smaller ε is, the larger the probabilistic distance.
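To make this tangible, here is a minimal simulation sketch (not from the original thread; the Uniform(0, 1) samples, the tolerance ε = 0.05, and the sample sizes are arbitrary choices) that estimates this probability for growing n:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, eps = 0.5, 0.05   # true mean of Uniform(0, 1) and a tolerance
trials = 2_000        # Monte Carlo repetitions per sample size

for n in [10, 100, 1_000, 10_000]:
    # Draw `trials` independent sample means, each computed from n samples,
    # then estimate P(|X̄ₙ - μ| > ε) as the fraction of trials exceeding ε.
    means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)
    prob = np.mean(np.abs(means - mu) > eps)
    print(f"n = {n:>6}: P(|mean - mu| > {eps}) ≈ {prob:.3f}")
```

The estimated probability drops toward zero as n grows, which is the weak law in action.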
Loosely speaking, the weak LLN means that the sample average equals the true average plus a distribution that becomes more and more concentrated around zero.

In other terms, we have an asymptotic expansion!

Well, sort of. In the distributional sense, at least.
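As a formula, with the caveat that the o(1) term lives in the world of distributions:

\bar{X}_n = \mu + o(1)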
(You might be familiar with the small and big O notation; this is the same idea, but for probability distributions.

The term o(1) denotes a distribution that becomes more and more concentrated around zero as n grows.

This is not precise, but we'll let that slide for the sake of simplicity.)
Does this asymptotic expansion tell us why we sometimes need tens of millions of samples, when a thousand seems to be enough on other occasions?

No. We have to go deeper.

Meet the Central Limit Theorem.
The central limit theorem (CLT) states that in a distributional sense, the √n-scaled, centered sample averages, normalized by the standard deviation σ, converge to the standard normal distribution.

(The notion "centered" means that we subtract the expected value.)
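In symbols:

\sqrt{n} \, \frac{\bar{X}_n - \mu}{\sigma} \longrightarrow N(0, 1) \quad \text{in distribution}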
Let’s unpack it: in terms of an asymptotic expansion, the Law of Large Numbers and the Central Limit Theorem imply that the sample average equals the sum of

1) the expected value μ,
2) a scaled normal distribution,
3) and a distribution that vanishes faster than 1/√n.
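Putting the three terms together:

\bar{X}_n = \mu + \frac{\sigma}{\sqrt{n}} \, N(0, 1) + o\left( \frac{1}{\sqrt{n}} \right)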
This expansion can be written in a simpler form by absorbing the constants into the normal distribution.

More precisely, this is how the normal distribution behaves with respect to scaling:
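a \, N(0, 1) \sim N(0, a^2), \quad \text{so} \quad \frac{\sigma}{\sqrt{n}} \, N(0, 1) \sim N\left( 0, \frac{\sigma^2}{n} \right)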
Thus, our asymptotic expansion takes a simpler form.

In other words, for large n, the sample average approximately follows a normal distribution with mean μ and variance σ²/n.
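In symbols:

\bar{X}_n \approx N\left( \mu, \frac{\sigma^2}{n} \right)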
The larger the n, the smaller the variance; the smaller the variance, the more the normal distribution is concentrated around the expected value μ.

This is why sometimes one million samples are not enough.

Larger variance ⇒ more samples.
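Here is a small sketch of that effect (again not from the original thread; the distributions and sizes are arbitrary choices): two zero-mean normal samples, one with σ = 1 and one with σ = 100, compared by the spread of their sample means.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000, 2_000   # sample size and number of repeated experiments

# Two zero-mean normal distributions with very different variances.
low_var  = rng.normal(0, 1,   size=(trials, n)).mean(axis=1)   # σ = 1
high_var = rng.normal(0, 100, size=(trials, n)).mean(axis=1)   # σ = 100

# Sample means scatter around 0 with standard deviation ≈ σ / √n.
print(f"σ = 1:   spread of sample means ≈ {low_var.std():.4f} (theory: {1 / n**0.5:.4f})")
print(f"σ = 100: spread of sample means ≈ {high_var.std():.4f} (theory: {100 / n**0.5:.4f})")
```

To shrink the spread of the σ = 100 sample means down to that of the σ = 1 case, n would have to grow by a factor of 10,000, since the variance of the sample mean is σ²/n.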
This post has been a collaboration with @levikul09, one of my favorite technical writers here.

Check out the full version:

thepalindrome.org/p/how-large-th…
If you liked this thread, you will love The Palindrome, my weekly newsletter on Mathematics and Machine Learning.

Join 19,000+ curious readers here: thepalindrome.org
