A concise guide from zero to one. 100% knowledge, 0% fluff. 🠓
1/13
Functions, the central objects of mathematics and computer science, are just mappings of inputs to outputs.
A convenient (albeit quite imprecise) way to define them is to describe their effect. An explicit formula is often available, which we can translate to code.
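For instance, here is an explicit formula translated directly to Python (the squaring function is just an illustrative choice):

    # An explicit formula: f(x) = x^2, translated directly to code.
    def f(x):
        return x * x

    print(f(5))  # 25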
2/13
However, giving an explicit formula is not always easy or possible.
For instance, can you calculate the number of ways a deck of n cards can be ordered, i.e., the number of its possible shuffles?
3/13
There is another way besides writing down an explicit formula.
Suppose that we shuffled n-1 cards. Given a new one, we can insert it into the deck at n possible locations.
Thus, all the possible shuffles of n cards can be obtained by shuffling n-1 cards first, then inserting the remaining one.
4/13
Counting this way gives rise to a formula that references itself. This is called recursion.
For the computation to end, we have to supply a so-called boundary condition. In our case, this is simple: a "deck" consisting of 1 card can be shuffled only one way.
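Written out (calling the count shuffles(n)), the recurrence reads: shuffles(n) = n · shuffles(n-1), with shuffles(1) = 1. Unrolling it gives n · (n-1) · … · 2 · 1, i.e., the factorial n!.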
5/13
Every recursion has two crucial components: the recursive step and the boundary condition.
6/13
In practice, we can implement recursive functions simply by having the function call itself in its own definition. Most programming languages support this.
Frequently, recursion is an extremely convenient way to write clear and concise functions.
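For example, here is the shuffle-counting recursion above as a Python sketch (the function name is my own):

    # Number of ways to order a deck of n cards; this is just n!.
    def shuffles(n):
        if n == 1:                      # boundary condition: one card, one order
            return 1
        return n * shuffles(n - 1)      # recursive step: insert the n-th card

    print(shuffles(4))  # 24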
7/13
However, recursion is a double-edged sword.
Let's talk about a case where the recursive step involves referencing the function multiple times.
The famous Fibonacci numbers provide an example of this.
8/13
Just like before, we can easily write a recursive function to compute the n-th Fibonacci number.
Can you think of any potential issues?
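A direct translation to Python (assuming the convention F(1) = F(2) = 1):

    # Naive recursive Fibonacci: F(n) = F(n - 1) + F(n - 2).
    def fib(n):
        if n <= 2:      # boundary conditions: F(1) = F(2) = 1
            return 1
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55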
9/13
For each call, the function calls itself two times. Each of those calls makes two more, and so on.
This is how the recursive calls look for n = 4. (Each branch represents a function call.)
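A text sketch of that call tree (assuming the base cases F(1) = F(2) = 1):

    F(4)
    ├── F(3)
    │   ├── F(2)
    │   └── F(1)
    └── F(2)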
10/13
As you probably guessed, this can blow up really fast.
Essentially, computing F(n) this way requires an exponential number of calls to F: the number of calls itself satisfies a Fibonacci-like recurrence, so it grows roughly like 1.618ⁿ.
11/13
Just out of curiosity, I measured the time it takes to compute a few Fibonacci numbers with the recursive function.
F(40) took more than 30 seconds on my computer. I didn't have the patience to wait out F(50).
So, recursion can be really slow.
12/13
TL;DR: a recursive function is one that references itself in its own definition. Recursive functions are powerful, but they can be really slow.
Can you think of a better way to implement the computation of the Fibonacci numbers? Share your ideas below! (I can think of at least three.)
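For instance, one well-known option is memoization: cache every computed value so each F(k) is evaluated only once. A minimal sketch using Python's functools.lru_cache:

    from functools import lru_cache

    # Memoized Fibonacci: each F(k) is computed once, then cached.
    @lru_cache(maxsize=None)
    def fib(n):
        if n <= 2:
            return 1
        return fib(n - 1) + fib(n - 2)

    print(fib(50))  # 12586269025, returns instantly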
13/13
• • •
The Law of Large Numbers is one of the most frequently misunderstood concepts in probability and statistics.
Just because you have lost ten blackjack games in a row, it doesn't mean you're more likely to win the next one.
What is the Law of Large Numbers, then? Read on:
The strength of probability theory lies in its ability to translate complex random phenomena into coin tosses, dice rolls, and other simple experiments.
So, let’s stick with coin tossing.
What will the average number of heads be if we toss a coin, say, a thousand times?
To mathematically formalize this question, we’ll need random variables.
Tossing a fair coin is described by the Bernoulli distribution, so let X₁, X₂, … be independent, identically distributed Bernoulli random variables, each equal to 1 (heads) with probability 1/2.
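A minimal simulation sketch of this setup (assuming numpy; for a fair coin, the running average should drift toward 1/2):

    import numpy as np

    # Toss a fair coin 1000 times: 1 = heads, 0 = tails.
    rng = np.random.default_rng(0)
    tosses = rng.integers(0, 2, size=1000)

    # Running average of heads after each toss.
    running_avg = np.cumsum(tosses) / np.arange(1, len(tosses) + 1)
    print(running_avg[[9, 99, 999]])  # after 10, 100, and 1000 tosses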
• • •
Matrix factorizations are among the pinnacle results of linear algebra.
From theory to applications, they are behind many theorems, algorithms, and methods. However, it is easy to get lost in the vast jungle of decompositions.
This is how to make sense of them.
We are going to study three matrix factorizations:
1. the LU decomposition,
2. the QR decomposition,
3. and the Singular Value Decomposition (SVD).
First, we'll take a look at LU.
1. The LU decomposition.
Let's start at the very beginning: linear equation systems.
Linear equations are surprisingly effective in modeling real-life phenomena: economic processes, biochemical systems, etc.
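As a minimal sketch of where LU fits in (using scipy.linalg.lu; the toy system here is my own):

    import numpy as np
    from scipy.linalg import lu

    # A small linear system Ax = b.
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    # lu() factors A as P @ L @ U: a permutation, a lower-triangular,
    # and an upper-triangular matrix.
    P, L, U = lu(A)

    # Solving Ax = b then takes two cheap triangular solves:
    y = np.linalg.solve(L, P.T @ b)  # solve Ly = P^T b
    x = np.linalg.solve(U, y)        # solve Ux = y
    print(x)                         # matches np.linalg.solve(A, b)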