One of my favorite formulas is the closed-form of the geometric series.
I am amazed by its ubiquity: whether we are solving basic problems or pushing the boundaries of science, the geometric series often makes an appearance.
Here is how to derive it from first principles:
Let’s start with the basics: like any other series, the geometric series is the limit of its partial sums.
Our task is to find that limit.
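Concretely, writing S_N for the N-th partial sum (indexing from n = 0 is an assumption here, but a standard one):

```latex
S_N = \sum_{n=0}^{N} q^n = 1 + q + q^2 + \dots + q^N,
\qquad
\sum_{n=0}^{\infty} q^n = \lim_{N \to \infty} S_N.
```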
There is an issue: the number of terms in the partial sums grows with N.
Thus, we can’t take the limit term by term.
The trick is to notice that multiplying the partial sums by (-q) yields a polynomial that can be used to eliminate all but two terms.
Adding them together yields a simple and manageable expression for the partial sums.
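Written out, with S_N denoting the N-th partial sum:

```latex
S_N - q S_N = (1 + q + \dots + q^N) - (q + q^2 + \dots + q^{N+1}) = 1 - q^{N+1},
```

and dividing by 1 − q (valid whenever q ≠ 1) gives

```latex
S_N = \frac{1 - q^{N+1}}{1 - q}.
```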
I know, this feels like pulling a rabbit from a hat.
Trust me, after you have seen this trick a few times, it’ll feel like second nature. The result is called a telescoping sum.
The partial sums now have a much simpler form.
We are almost done.
Before we study the limit of partial sums, let’s focus on qᴺ.
Its limiting behavior (as N goes to ∞) is quite simple:
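In short:

```latex
\lim_{N \to \infty} q^N =
\begin{cases}
0 & \text{if } |q| < 1, \\
1 & \text{if } q = 1, \\
\text{does not exist} & \text{otherwise}.
\end{cases}
```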
With this, we are ready to put all the pieces together.
The geometric series is convergent for all |q| < 1, with a nice and simple closed-form expression as the cherry on top.
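Combining the closed form of S_N with the vanishing of q^{N+1}:

```latex
\sum_{n=0}^{\infty} q^n = \lim_{N \to \infty} \frac{1 - q^{N+1}}{1 - q} = \frac{1}{1 - q},
\qquad |q| < 1.
```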
This can be beautifully visualized in the case of q = 1/2.
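The picture repeatedly halves a unit square: the pieces 1/2, 1/4, 1/8, … fill it completely. A quick numerical sanity check (a minimal Python sketch):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... approach 1,
# matching the geometric series value q / (1 - q) for q = 1/2
# (the series here starts from n = 1, dropping the leading 1).
q = 0.5
partial_sum = sum(q**n for n in range(1, 100))  # q + q^2 + ... + q^99
closed_form = q / (1 - q)                       # = 1 for q = 1/2
print(partial_sum, closed_form)
```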
Where does the geometric series appear?
For instance, when deriving a closed-form expression for the Fibonacci numbers. Or, tossing coins ad infinitum.
This simple formula is one of the building blocks of mathematics, and it should be under the belt of anyone interested in looking behind the curtain of science, engineering, and mathematics.
Most machine learning practitioners don’t understand the math behind their models.
That's why I've created a FREE roadmap so you can master the three main topics you'll need: algebra, calculus, and probability.
If the sidewalk is wet, is it raining? Not necessarily. Yet, we are inclined to think so. This is a common logical fallacy called "affirming the consequent".
However, it is not entirely wrong. Why? Enter Bayes’ theorem:
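In its usual form, for events A and B with P(B) > 0:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.
```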
Propositions of the form "if A, then B" are called implications.
They are written as "A → B", and they form the bulk of our scientific knowledge.
For example, the 2nd law of thermodynamics: "if X is a closed system, then the entropy of X cannot decrease."
In the implication A → B, the proposition A is called the "premise", while B is called the "conclusion".
The premise implies the conclusion, but not the other way around.
If you observe a wet sidewalk, it is not necessarily raining. Someone might have spilled a barrel of water.
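Bayes’ theorem quantifies this: seeing a wet sidewalk doesn’t prove rain, but it can make rain much more probable. Here is a toy calculation; all the numbers (the prior and both likelihoods) are hypothetical, chosen purely for illustration:

```python
# Hypothetical numbers, purely for illustration:
p_rain = 0.10            # prior: it rains 10% of the time
p_wet_given_rain = 0.99  # rain almost always wets the sidewalk
p_wet_given_dry = 0.05   # sprinklers, spilled barrels of water, ...

# Law of total probability, then Bayes' theorem:
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 2))  # 0.69: wet sidewalk raises rain from 10% to ~69%
```

So affirming the consequent is a fallacy as a deduction, yet observing the conclusion can still be strong evidence for the premise.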
There is a non-recursive formula for the Fibonacci numbers, expressing them in terms of the golden ratio and its powers.
Why should you be interested? Because it teaches an extremely valuable lesson about power series.
Read on to find out:
The Fibonacci numbers form one of the most famous integer sequences, known for their intimate connection to the golden ratio, sunflower spirals, mating habits of rabbits, and several other things.
They are defined by a simple second-order recursion:
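With the convention F₀ = 0, F₁ = 1 (an assumption; some authors start at F₁ = F₂ = 1):

```latex
F_0 = 0, \qquad F_1 = 1, \qquad F_{n+2} = F_{n+1} + F_n.
```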
What’s less well known is that the Fibonacci numbers have a simple and beautiful closed-form expression, written in terms of the golden ratio.
This is called the Binet formula.
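With φ = (1 + √5)/2 the golden ratio and ψ = (1 − √5)/2 its conjugate:

```latex
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}},
\qquad
\varphi = \frac{1 + \sqrt{5}}{2},
\quad
\psi = \frac{1 - \sqrt{5}}{2}.
```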
In this thread, we are going to derive it from first principles.
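Before the derivation, a quick numerical check (a minimal Python sketch, assuming the convention F₀ = 0, F₁ = 1):

```python
from math import sqrt

# Binet's formula: F_n = (phi^n - psi^n) / sqrt(5),
# where phi is the golden ratio and psi its conjugate.
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def binet(n):
    # Rounding absorbs floating-point error; the exact value is an integer.
    return round((phi**n - psi**n) / sqrt(5))

# Compare against the recursion F_{n+2} = F_{n+1} + F_n.
fib = [0, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

print(all(binet(n) == fib[n] for n in range(len(fib))))  # True
```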
The Law of Large Numbers is one of the most frequently misunderstood concepts of probability and statistics.
Just because you’ve lost ten blackjack games in a row, it doesn’t mean you are more likely to win the next one.
What is the law of large numbers, then? Read on:
The strength of probability theory lies in its ability to translate complex random phenomena into coin tosses, dice rolls, and other simple experiments.
So, let’s stick with coin tossing.
What will the average number of heads be if we toss a coin, say, a thousand times?
To mathematically formalize this question, we’ll need random variables.
Tossing a fair coin is described by the Bernoulli distribution, so let X₁, X₂, … be such independent and identically distributed random variables.
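The law of large numbers then says that the sample mean (X₁ + ⋯ + X_N)/N converges to the expected value E[X₁] = 1/2. A quick simulation (a minimal Python sketch; the seed and sample sizes are arbitrary choices):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

def sample_mean(n):
    # Average of n fair-coin tosses, i.e. n i.i.d. Bernoulli(1/2) variables.
    return sum(random.randint(0, 1) for _ in range(n)) / n

# The sample mean drifts toward 1/2 as the number of tosses grows.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```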