Apr 22 · 15 tweets · 5 min read
I described some of the most beautiful and famous mathematical theorems to Midjourney.

Here is how it imagined them:

1. "The set of real numbers is uncountably infinite."
2. The Baire category theorem: "In a complete metric space, the intersection of countably many dense sets remains dense."
3. Zorn's lemma: "A partially ordered set containing upper bounds for every chain necessarily contains at least one maximal element."
4. The fundamental theorem of calculus: "The integral of a function's derivative recovers the original function, up to a constant."
5. The Banach-Tarski paradox: "Decomposing a solid sphere into a finite number of disjoint subsets, and then reassembling those subsets to create two spheres identical to the original one."
6. "Every vector space has a Hamel basis."
7. The fundamental theorem of algebra: "Every non-constant polynomial equation has at least one complex root."
8. Gödel's incompleteness theorems: "In any formal system of axioms, there are true statements that cannot be proven within the system and the consistency of the system cannot be proven by its own axioms."
9. The fundamental theorem of arithmetic: "Every positive integer greater than 1 can be represented uniquely as a product of prime numbers."
10. Brouwer's fixed point theorem: "In any continuous transformation of a compact, convex set in Euclidean space, there is at least one point that remains fixed."
11. The central limit theorem: "The sum of a large number of independent and identically distributed random variables will be approximately normally distributed, regardless of the original distribution."
12. The Heine-Borel theorem: "The compact subsets of Euclidean space are precisely those that are closed and bounded."
13. The singular value decomposition: "Every matrix can be decomposed into the product of a unitary, a diagonal, and another unitary matrix."
14. Bonus: "The set of real numbers is uncountably infinite, in the style of Salvador Dali."
If you have enjoyed this thread, share it with your friends and give me a follow!

This is not my typical content: I usually post math explainers here. However, this was my first time trying out Midjourney. Now, I am hooked.

(Don't worry, I won't go into "AI influencer" mode.)


# More from @TivadarDanka

Apr 21
The Gram-Schmidt process is one of the most important algorithms in linear algebra.

Its task is simple: orthogonalizing vector sets.
Its applications are endless: matrix decompositions, eigenvalue problems, numerical linear algebra...

This is how it works:
The problem is simple. We are given a set of basis vectors

a₁, a₂, …, aₙ,

and we want to turn them into an orthogonal basis

q₁, q₂, …, qₙ,

such that for each i, the vectors q₁, …, qᵢ span the same subspace as a₁, …, aᵢ.
How do we achieve such a result? One step at a time.

Let’s look at an example! Our input consists of three highly correlated but still independent three-dimensional vectors.
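The step-by-step orthogonalization described above can be sketched in a few lines of NumPy. This is a minimal illustration of classical Gram-Schmidt (my own sketch, not the thread's figures); the input vectors are hypothetical stand-ins for the "correlated but independent" example:

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A via classical Gram-Schmidt."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    for i in range(n):
        q = A[:, i].astype(float)
        for j in range(i):
            # subtract the component of a_i along the already-built q_j
            q -= (Q[:, j] @ A[:, i]) * Q[:, j]
        Q[:, i] = q / np.linalg.norm(q)
    return Q

# three correlated but linearly independent 3D vectors, as columns
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
Q = gram_schmidt(A)
print(np.round(Q.T @ Q, 10))  # identity matrix: the columns are orthonormal
```

One step at a time, exactly as described: each new qᵢ is aᵢ with its components along the previous q-s removed, then normalized.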
Apr 12
Here is a probabilistic puzzle.

Feedex and Acme are two delivery companies. Feedex trains are 80% on time, while only 40% of its trucks are.

However, Acme's trains are 100% on time, and 60% of its trucks are as well.

Yet, Feedex is more reliable! Why?
This lesson is brought to you by @brilliantorg's Introduction to Probability course. Their interactive, first-principles approach will make sure you understand and retain what you learn there.

Since I'm partnering with them, I have a special offer for you later.

Let's go!
As Acme dominates reliability in both categories, it seems like a better choice.

However, something is missing from the picture. What do you think it is?
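What's missing is how much each company ships by train versus by truck. The thread doesn't give these numbers, so here is a hypothetical fleet mix (my own assumption, purely illustrative) showing how the per-category rates can flip once they're weighted:

```python
# Hypothetical delivery counts (NOT from the thread): Feedex ships
# mostly by train, Acme mostly by truck. Each entry is
# (number of deliveries, on-time rate).
feedex = {"train": (80, 0.80), "truck": (20, 0.40)}
acme   = {"train": (20, 1.00), "truck": (80, 0.60)}

def overall_rate(company):
    total = sum(n for n, _ in company.values())
    on_time = sum(n * r for n, r in company.values())
    return on_time / total

print(overall_rate(feedex))  # 0.72
print(overall_rate(acme))    # 0.68
```

With this mix, Feedex is on time 72% of the time overall versus Acme's 68%, even though Acme wins in both categories separately. This reversal is an instance of Simpson's paradox.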
Apr 10
In machine learning, we take gradient descent for granted. We rarely question why it works.

What's usually told is the mountain-climbing analogy: to find the valley, take steps in the direction of steepest descent.

But why does this work so well? Read on.
Our journey leads through

• differentiation, as the rate of change,
• the basics of differential equations,
• and equilibrium states.

Buckle up! Deep dive into the beautiful world of dynamical systems incoming. (Full post link at the end.)
First, let's talk about derivatives and their mechanical interpretation!

Suppose that the position of an object at time t is given by the function x(t), and for simplicity, assume that it is moving along a straight line — as the distance-time plot illustrates below.
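The dynamical-systems view can be made concrete: gradient descent is Euler's method applied to the ODE x'(t) = −f'(x(t)), whose equilibria are exactly the critical points of f. A minimal sketch, using f(x) = (x − 2)² as my own example (the thread's specific function isn't reproduced here):

```python
# Gradient descent as Euler's method on the ODE x'(t) = -f'(x(t)).
# Illustrative choice: f(x) = (x - 2)**2, whose unique equilibrium
# is the minimizer x = 2.
def f_prime(x):
    return 2 * (x - 2)

x, step = 10.0, 0.1
for _ in range(100):
    x -= step * f_prime(x)  # one Euler step toward the equilibrium

print(round(x, 6))  # converges to 2.0
```

Each iteration shrinks the distance to the equilibrium by a constant factor here, which is why the trajectory settles at the minimum: the valley is a stable equilibrium of the underlying dynamical system.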
Mar 29
The single most important "side effect" of solving systems of linear equations: the LU decomposition.

Why? Because in practice, it is the engine behind inverting matrices and computing their determinants.

Here is how it works.
Why is the LU decomposition useful? There are two main applications:

• computing determinants,
• and inverting matrices.

Check out how the LU decomposition simplifies the determinant. (The determinant of a triangular matrix is the product of its diagonal entries.)
We’ll demonstrate the technique in the 3 x 3 case.

Let’s go back to square one: where do matrices come from?

For one, systems of linear equations. They are used to model various phenomena ranging from economic processes to biological systems.
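The determinant trick from above can be sketched directly. Here is a minimal Doolittle-style LU factorization without pivoting (my own illustration, assuming no zero pivots arise), applied to a 3 × 3 example of my choosing:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: A = L @ U, L unit lower triangular."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # elimination multiplier
            U[i, :] -= L[i, k] * U[k, :]       # zero out entry below the pivot
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_decompose(A)

# det(A) = det(L) * det(U) = 1 * (product of U's diagonal)
print(np.prod(np.diag(U)))   # matches np.linalg.det(A)
```

Since L has ones on its diagonal, det(A) reduces to multiplying the diagonal of U, which is exactly why LU makes determinants cheap. (Production code uses a pivoted LU, e.g. `scipy.linalg.lu`, for numerical stability.)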
Mar 28
What can go wrong will go wrong.

This is Murphy's famous First Law. Here is a probabilistic reformulation: "what can go wrong with probability 𝑝 > 0 will go wrong with probability 1". But when will it go wrong?

Surprisingly, this is encoded in the probability.
Understanding probabilistic thinking is one of the best investments you can make, and @brilliantorg's Introduction to Probability will teach you the very essence.

Since I'm partnering with them, I have a special offer from them for you later.

Let's get started!
Some events are more important than others.

For instance, if we send Endurance, the rover, to Mars, we want to be absolutely sure that failures are not likely to happen within a given timeframe.

How can probability reveal this?
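The reformulation above can be checked with a few lines of arithmetic. If a failure occurs with probability p in each independent trial, the chance of at least one failure in n trials is 1 − (1 − p)ⁿ, which tends to 1, and the waiting time until the first failure is geometric with mean 1/p. A minimal sketch (p = 0.01 is my own illustrative choice):

```python
# If something goes wrong with probability p in each independent trial,
# then P(at least one failure in n trials) = 1 - (1 - p)**n -> 1.
p = 0.01

for n in (10, 100, 1000):
    print(n, 1 - (1 - p) ** n)

# "When" it goes wrong: the waiting time until the first failure
# is geometrically distributed with mean 1/p.
print(1 / p)  # expect the first failure around trial 100
```

So Murphy's Law is just the limit n → ∞, and the "when" is encoded in the mean 1/p of the geometric distribution: rarer failures take proportionally longer to show up.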