The number of atoms in the observable universe is approximately 10^80.
Randomly smashing my keyboard and reproducing this tweet has a chance of 1 in 256^280: at most 280 characters, each one of 256 possible values. That's roughly 1 in 10^674, infinitesimally small. Yet, if I keep trying long enough, it will happen with probability 1.
Let me explain why.
First things first. If we generate a random string of 5 characters, what is the probability of getting "hello"?
Assuming extended ASCII, where each character is a single byte, we have 256 options in total.
Thus, each character has a 1/256 probability of hitting the right one.
If we are truly random in our selections, each choice is independent of the others.
Thus, hitting every character has a probability of (1/256)^5.
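To make this concrete, here is a minimal Python sketch of both claims: the probability of hitting "hello" in a single try, and the probability of hitting it at least once if we keep trying. (The trial counts are just illustrative.)

```python
# Probability of generating "hello" in one try:
# 5 characters, 256 equally likely options each, chosen independently.
p = (1 / 256) ** 5
print(p)  # ~9.09e-13

# Probability of at least one hit in n independent tries: 1 - (1 - p)^n.
# As n grows, this tends to 1. That's why persistence wins eventually.
for n in (10**6, 10**12, 10**15):
    print(n, 1 - (1 - p) ** n)
```

By n = 10^15 tries, the probability of at least one hit is indistinguishable from 1.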
There is a deep truth behind all of this: probability is the mathematical extension of logic, augmenting our reasoning toolkit with the concept of uncertainty.
In-depth exploration of probabilistic thinking incoming.
Our journey ahead has three stops:
1. an introduction to mathematical logic,
2. a touch of elementary set theory,
3. and finally, understanding probabilistic thinking.
First things first: mathematical logic.
In logic, we work with propositions.
A proposition is a statement that is either true or false, like
• "it's raining outside",
• "the sidewalk is wet".
These are often abbreviated as variables, such as A = "it's raining outside".
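To make propositions tangible, here is a minimal Python sketch treating propositions as booleans. (The variable names are my own illustration, not part of the formal definition.)

```python
# A proposition is either true or false, so it maps naturally onto a boolean.
raining = True        # A = "it's raining outside"
sidewalk_wet = True   # B = "the sidewalk is wet"

# Logical connectives become boolean operators.
print(raining and sidewalk_wet)       # A AND B
print(raining or sidewalk_wet)        # A OR B
print(not raining)                    # NOT A
print((not raining) or sidewalk_wet)  # "A implies B", rewritten as NOT A OR B
```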
The single biggest debate in statistics: is probability frequentist or Bayesian?
It's both, and I'll explain why.
Buckle up. Deep-dive thread below.
First, let's look at how probability behaves.
Probability quantitatively measures the likelihood of events, like rolling a six with a die. It's a number between zero and one. This is independent of interpretation.
In the language of mathematics, events are formalized as sets within an event space. (The event space itself is also a set.)
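Here is a minimal Python sketch of that formalization, using a fair six-sided die as the running example. (The die and the uniform probabilities are my assumptions for illustration.)

```python
from fractions import Fraction

# The event space: all possible outcomes of rolling a fair six-sided die.
event_space = {1, 2, 3, 4, 5, 6}

# Events are subsets of the event space.
rolling_six = {6}
rolling_even = {2, 4, 6}

# With equally likely outcomes, P(event) = |event| / |event space|,
# which is always a number between zero and one, as required.
def probability(event):
    return Fraction(len(event), len(event_space))

print(probability(rolling_six))   # 1/6
print(probability(rolling_even))  # 1/2
```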