If the sidewalk is wet, is it raining? Not necessarily. Yet, we are inclined to think so. This is a common logical fallacy called "affirming the consequent".
However, it is not entirely wrong either. Why? Enter Bayes' theorem:
Propositions of the form "if A, then B" are called implications.
They are written as "A → B", and they form the bulk of our scientific knowledge.
Say, "if X is a closed system, then the entropy of X cannot decrease" is the 2nd law of thermodynamics.
In the implication A → B, the proposition A is called the "premise", while B is called the "conclusion".
The premise implies the conclusion, but not the other way around.
If you observe a wet sidewalk, it is not necessarily raining. Someone might have spilled a barrel of water.
Let's talk about probability!
Probability is an extension of classical logic, where the analogue of implication is the conditional probability.
The closer P(B | A) is to 1, the more likely B (the conclusion) becomes when observing A (the premise).
Bayes' theorem expresses P(A | B), the probability of the premise given that the conclusion is observed.
Best of all, it relates P(A | B) to P(B | A). That is, it tells us whether we can "affirm the consequent" or not!
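In the notation above, the theorem reads

P(A | B) = P(B | A) · P(A) / P(B),

where P(A) and P(B) are the probabilities of the premise and the conclusion on their own.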
Suppose that the conclusion B undoubtedly follows from the premise A. (That is, P(B | A) = 1.)
How likely is the other way around? Bayes' theorem gives us an answer: with P(B | A) = 1, the formula simplifies to P(A | B) = P(A) / P(B).
Thus, when we glance at the sidewalk outside and see that it is soaking wet, it is fairly safe to assume that it's raining.
The alternative explanations, like a spilled barrel of water, are fairly rare, so P(B) is not much larger than P(A).
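To make this concrete, here is a minimal sketch with made-up numbers: suppose it rains on 10% of days, the sidewalk gets wet for some other reason on 1% of the rain-free days, and rain always wets the sidewalk. These priors are illustrative assumptions, not measurements.

```python
# Illustrative priors: made-up numbers, not real data.
p_rain = 0.10                # P(A): probability that it is raining
p_wet_given_rain = 1.0       # P(B | A): rain always wets the sidewalk
p_wet_given_no_rain = 0.01   # e.g. someone spilled a barrel of water

# Total probability of a wet sidewalk, P(B).
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(f"P(rain | wet sidewalk) = {p_rain_given_wet:.3f}")  # 0.917
```

Even though it rains on only 10% of days, a wet sidewalk pushes the probability of rain above 90%, precisely because the competing explanations are so rare.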
Most machine learning practitioners don’t understand the math behind their models.
That's why I've created a FREE roadmap so you can master the 3 topics you'll ever need: algebra, calculus, and probability theory.
There is a non-recursive formula for the Fibonacci numbers, expressing them in terms of the golden ratio and its powers.
Why should you be interested? Because it teaches an extremely valuable lesson about power series.
Read on to find out what that lesson is:
The Fibonacci numbers form one of the most famous integer sequences, known for their intimate connection to the golden ratio, sunflower spirals, mating habits of rabbits, and several other things.
They are defined by a simple second-order recursion:
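In the most common convention, F₀ = 0, F₁ = 1, and Fₙ = Fₙ₋₁ + Fₙ₋₂ for n ≥ 2.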
What’s usually not known is that the Fibonacci numbers have a simple and beautiful closed-form expression, written in terms of the golden ratio.
This is called the Binet formula.
In this thread, we are going to derive it from first principles.
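As a quick sanity check before the derivation, here is a small Python sketch comparing the defining recursion to the closed form; the floating-point value, rounded to the nearest integer, matches exactly for small n.

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2   # the golden ratio
PSI = (1 - sqrt(5)) / 2   # its conjugate

def fib_recursive(n: int) -> int:
    """Fibonacci numbers via the defining recursion Fₙ = Fₙ₋₁ + Fₙ₋₂."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n: int) -> int:
    """Fibonacci numbers via the Binet formula, rounded to the nearest integer."""
    return round((PHI**n - PSI**n) / sqrt(5))

# The two computations agree (floating-point precision suffices for small n).
assert all(fib_recursive(n) == fib_binet(n) for n in range(30))
print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```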
The Law of Large Numbers is one of the most frequently misunderstood concepts of probability and statistics.
Just because you have lost ten blackjack games in a row, it doesn't mean that you are more likely to get lucky on the next one.
What is the law of large numbers, then? Read on:
The strength of probability theory lies in its ability to translate complex random phenomena into coin tosses, dice rolls, and other simple experiments.
So, let’s stick with coin tossing.
What will the average number of heads be if we toss a coin, say, a thousand times?
To mathematically formalize this question, we’ll need random variables.
Tossing a fair coin is described by the Bernoulli distribution, so let X₁, X₂, … be independent, identically distributed Bernoulli random variables, each equal to 1 (heads) with probability 1/2.
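A quick simulation makes the question tangible. This is only a sketch: the NumPy-based setup, the seed, and the sample size of 1000 are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # arbitrary seed, for reproducibility

n_tosses = 1000
tosses = rng.integers(0, 2, size=n_tosses)  # X₁, ..., Xₙ: 1 = heads, 0 = tails

# Running average after 1, 2, ..., n tosses.
running_average = np.cumsum(tosses) / np.arange(1, n_tosses + 1)

for n in (10, 100, 1000):
    print(f"average after {n:>4} tosses: {running_average[n - 1]:.3f}")
# The running average drifts toward 0.5, even though each toss stays unpredictable.
```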
In machine learning, we take gradient descent for granted.
We rarely question why it works.
What's usually told is the mountain-climbing analogy: to find the valley, take steps in the direction of steepest descent.
But why does this work so well? Read on:
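Before the deep dive, here is the steepest-descent step itself as a minimal sketch; the function being minimized, its hand-computed derivative, the starting point, and the learning rate are all illustrative choices, not anything canonical.

```python
def f(x):
    """An illustrative function to minimize: a parabola with its minimum at x = 3."""
    return (x - 3) ** 2

def grad_f(x):
    """Its derivative, computed by hand."""
    return 2 * (x - 3)

x = 0.0               # arbitrary starting point
learning_rate = 0.1   # illustrative step size

# Gradient descent: repeatedly step in the direction of steepest descent,
# that is, against the gradient.
for _ in range(100):
    x -= learning_rate * grad_f(x)

print(x, f(x))  # x ends up very close to 3, the minimizer
```

The iterates slide down the slope and settle near the bottom of the valley, which is exactly the mountain-climbing picture.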
Our journey leads through:
• Differentiation as the rate of change
• The basics of differential equations
• And equilibrium states
Buckle up!
Deep dive into the beautiful world of dynamical systems incoming.
First, let's talk about derivatives and their mechanical interpretation!
Suppose that the position of an object at time t is given by the function x(t), and for simplicity, assume that it is moving along a straight line — as the distance-time plot illustrates below.
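To make the "rate of change" idea concrete, here is a small sketch that approximates the instantaneous speed at a moment t₀ by average speeds over shrinking time intervals; the position function x(t) = t² is an illustrative choice, not the one on the plot.

```python
def position(t):
    """Illustrative position function x(t) = t² (not the one from the plot)."""
    return t ** 2

t0 = 1.0  # the moment at which we want the instantaneous speed

# The average speed over [t0, t0 + h] approaches the derivative x'(t0) as h shrinks.
for h in (1.0, 0.1, 0.01, 0.001):
    average_speed = (position(t0 + h) - position(t0)) / h
    print(f"h = {h:<6}  average speed = {average_speed:.4f}")
# The values approach 2, the derivative of t² at t = 1.
```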