The following multiplication method makes everybody wish they had been taught math like this in school.
It's not just a cute visual tool: it illuminates how and why long multiplication works.
Here is the full story:
First, the method.
The first operand (21 in our case) is represented by two groups of lines: two lines for the 1st digit (the 2), and one line for the 2nd digit (the 1).
One group for each digit.
Similarly, the second operand (32) is encoded with two groups of lines, one for each digit.
These lines are perpendicular to the previous ones.
Now comes the magic.
Count the intersections among the lines. Turns out that they correspond to the digits of the product 21 · 32.
What is this sorcery?
Let’s decompose the operands into tens and ones before multiplying them together.
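Spelled out with our numbers:

21 · 32 = (20 + 1) · (30 + 2)
        = 20 · 30 + 20 · 2 + 1 · 30 + 1 · 2
        = 600 + 40 + 30 + 2
        = 672.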
By carrying out the product term by term, we are doing the same thing!
Here it is, visualized on our line representation.
There’s more. How do we multiply 21 · 32 by hand?
First, we calculate 21 · 30 = 630, then 21 · 2 = 42, which we sum up to get 21 · 32 = 672.
We learn this at elementary school like a cookbook recipe: we don’t learn the why, just the how.
Why is this relevant?
Because this is exactly what happens with the Japanese multiplication method!
Check this out one more time.
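The counting-and-carrying procedure also fits in a short program. Here is a minimal sketch in Python (the function name and structure are my own, not a standard implementation of the method):

```python
def line_multiply(a, b):
    # Simulate the line method: each pair of crossing line groups
    # contributes digit_a * digit_b intersections, collected by the
    # combined place value of the two digits.
    da = [int(d) for d in str(a)]
    db = [int(d) for d in str(b)]
    # columns[k] holds the intersection count for place value 10^k
    columns = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            # place value exponent, counted from the right
            k = (len(da) - 1 - i) + (len(db) - 1 - j)
            columns[k] += x * y
    # Carry over, just like in long multiplication
    result, carry = 0, 0
    for k, count in enumerate(columns):
        total = count + carry
        result += (total % 10) * 10 ** k
        carry = total // 10
    result += carry * 10 ** len(columns)
    return result

print(line_multiply(21, 32))  # 672
```

Each entry of `columns` is exactly the number of intersections in one diagonal group of the picture; the carry loop is the same carrying we do in long multiplication.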
What’s the lesson here?
That visual and algebraic thinking go hand in hand. The Japanese method neatly illustrates how multiplication works, but with the algebra behind it, we feel the pulse of long multiplication.
We are no longer mere users; we see behind the curtain now.
Most machine learning practitioners don’t understand the math behind their models.
That's why I've created a FREE roadmap so you can master the 3 main topics you'll ever need: algebra, calculus, and probability theory.
If the sidewalk is wet, is it raining? Not necessarily. Yet, we are inclined to think so. This is a common logical fallacy called "affirming the consequent".
However, it is not entirely wrong. Why? Enter Bayes' theorem:
Propositions of the form "if A, then B" are called implications.
They are written as "A → B", and they form the bulk of our scientific knowledge.
For instance, "if X is a closed system, then the entropy of X cannot decrease" is the 2nd law of thermodynamics.
In the implication A → B, the proposition A is called the "premise", while B is called the "conclusion".
The premise implies the conclusion, but not the other way around.
If you observe a wet sidewalk, it is not necessarily raining. Someone might have spilled a barrel of water.
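Bayes' theorem lets us put a number on this intuition. Here is a sketch with invented probabilities (every number below is an assumption for illustration, not data):

```python
# All numbers below are made up purely for illustration.
p_rain = 0.10               # prior: P(rain)
p_wet_given_rain = 0.99     # P(wet sidewalk | rain)
p_wet_given_no_rain = 0.05  # P(wet sidewalk | no rain): spills, sprinklers...

# Law of total probability: P(wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Bayes' theorem: P(rain | wet) = P(wet | rain) · P(rain) / P(wet)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 4))  # 0.6875
```

With these numbers, seeing a wet sidewalk raises the probability of rain from 10% to about 69%: the implication cannot be reversed, but the evidence still counts.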
There is a non-recursive formula for the Fibonacci numbers, expressing them in terms of the golden ratio and its powers.
Why should you be interested? Because it teaches an extremely valuable lesson about power series.
Read on to find out what that lesson is:
The Fibonacci numbers form one of the most famous integer sequences, known for their intimate connection to the golden ratio, sunflower spirals, mating habits of rabbits, and several other things.
They are defined by a simple second-order recursion: each term is the sum of the previous two, that is, F(n) = F(n − 1) + F(n − 2), with F(0) = 0 and F(1) = 1.
What’s usually not known is that the Fibonacci numbers have a simple and beautiful closed-form expression, written in terms of the golden ratio.
This is called the Binet formula.
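For reference, the formula states:

F(n) = (φⁿ − ψⁿ) / √5,

where φ = (1 + √5) / 2 is the golden ratio and ψ = (1 − √5) / 2 is its conjugate.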
In this thread, we are going to derive it from first principles.
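Before the derivation, the formula is easy to sanity-check numerically. A quick sketch (function names are mine):

```python
import math

def fib_recursive(n):
    # F(0) = 0, F(1) = 1, F(n) = F(n - 1) + F(n - 2)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # Binet formula: F(n) = (phi^n - psi^n) / sqrt(5)
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2  # the golden ratio
    psi = (1 - sqrt5) / 2  # its conjugate
    return round((phi ** n - psi ** n) / sqrt5)

# The two definitions agree (floating point limits this to moderate n):
print(all(fib_recursive(n) == fib_binet(n) for n in range(50)))  # True
```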
The Law of Large Numbers is one of the most frequently misunderstood concepts of probability and statistics.
Losing ten blackjack games in a row doesn't make you any more likely to win the next one.
What is the law of large numbers, then? Read on:
The strength of probability theory lies in its ability to translate complex random phenomena into coin tosses, dice rolls, and other simple experiments.
So, let’s stick with coin tossing.
What will the average number of heads be if we toss a coin, say, a thousand times?
To mathematically formalize this question, we’ll need random variables.
Tossing a fair coin is described by the Bernoulli distribution with parameter 1/2, so let X₁, X₂, … be independent and identically distributed Bernoulli random variables.
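A simulation makes the statement concrete. A sketch (the sample sizes and the seed are arbitrary choices):

```python
import random

random.seed(42)  # fix the randomness so the run is reproducible

# X1, X2, ... are i.i.d. Bernoulli(1/2): 1 for heads, 0 for tails.
n = 100_000
tosses = [random.randint(0, 1) for _ in range(n)]

# The law of large numbers: the running average tends to E[X] = 1/2.
for k in (10, 100, 1_000, 100_000):
    print(k, sum(tosses[:k]) / k)
```

The early averages can wander far from 1/2; only the long-run average settles down. Nothing "compensates" for an unlucky streak; the streak simply gets diluted.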
In machine learning, we take gradient descent for granted.
We rarely question why it works.
What's usually told is the mountain-climbing analogy: to find the valley, step in the direction of steepest descent.
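That recipe is gradient descent in its entirety. A minimal sketch on the toy function f(x) = x² (the function, starting point, and learning rate are arbitrary illustrations):

```python
def grad(x):
    # Derivative of f(x) = x**2; the gradient points uphill,
    # so stepping against it moves us downhill.
    return 2 * x

x = 5.0              # arbitrary starting point on the "mountain"
learning_rate = 0.1  # arbitrary step size
for _ in range(100):
    x -= learning_rate * grad(x)

print(round(x, 6))  # 0.0, the minimum of f
```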
But why does this work so well? Read on:
Our journey leads through:
• Differentiation, as the rate of change
• The basics of differential equations
• And equilibrium states
Buckle up!
Deep dive into the beautiful world of dynamical systems incoming.
First, let's talk about derivatives and their mechanical interpretation!
Suppose that the position of an object at time t is given by the function x(t), and for simplicity, assume that it is moving along a straight line — as the distance-time plot illustrates below.
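Numerically, the derivative is just the average speed over a shrinking time window. A sketch with a made-up position function (x(t) = t² is an assumption for illustration):

```python
def x(t):
    # A stand-in position function: x(t) = t**2,
    # so the exact velocity (derivative) is 2t.
    return t * t

def velocity(t, h=1e-6):
    # Average speed over a tiny window around t approximates
    # the instantaneous rate of change (central difference).
    return (x(t + h) - x(t - h)) / (2 * h)

print(velocity(1.0))  # ≈ 2.0
print(velocity(3.0))  # ≈ 6.0
```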