Tivadar Danka
Apr 22 · 15 tweets · 5 min read
I described some of the most beautiful and famous mathematical theorems to Midjourney.

Here is how it imagined them:

1. "The set of real numbers is uncountably infinite." Image
2. The Baire category theorem: "In a complete metric space, the intersection of countably many dense sets remains dense." Image
3. Zorn's lemma: "A partially ordered set containing upper bounds for every chain necessarily contains at least one maximal element." Image
4. The fundamental theorem of calculus: "The integral of a function's derivative recovers the original function, up to a constant." Image
5. The Banach-Tarski paradox: "Decomposing a solid sphere into a finite number of disjoint subsets, and then reassembling those subsets to create two spheres identical to the original one." Image
6. "Every vector space has a Hamel basis." Image
7. The fundamental theorem of algebra: "Every non - constant polynomial equation has at least one complex root." Image
8. Gödel's incompleteness theorems: "In any formal system of axioms, there are true statements that cannot be proven within the system and the consistency of the system cannot be proven by its own axioms." Image
9. The fundamental theorem of arithmetic: "Every positive integer greater than 1 can be represented uniquely as a product of prime numbers." Image
10. Brouwer's fixed point theorem: "In any continuous transformation of a compact, convex set in Euclidean space, there is at least one point that remains fixed." Image
11. The central limit theorem: "The sum of a large number of independent and identically distributed random variables will be approximately normally distributed, regardless of the original distribution." Image
12. The Heine-Borel theorem: "The compact subsets of Euclidean space are precisely those that are closed and bounded." Image
13. The singular value decomposition: "Every matrix can be decomposed into the product of a unitary, a diagonal, and another unitary matrix." Image
14. Bonus: "The set of real numbers is uncountably infinite, in the style of Salvador Dali." Image
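Statement 13 is the one on the list you can verify in a few lines of code. Here is a minimal sketch with numpy (for a real matrix, "unitary" becomes "orthogonal"): factor a matrix with the SVD, multiply the pieces back together, and you recover the original.

```python
import numpy as np

# Quick numerical check of statement 13: factor a matrix with the SVD
# and multiply the factors back together.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))                 # any real matrix will do

U, s, Vh = np.linalg.svd(A, full_matrices=False)

A_rebuilt = U @ np.diag(s) @ Vh             # orthogonal · diagonal · orthogonal
print(np.allclose(A, A_rebuilt))            # True: the factors recover A
print(np.allclose(U.T @ U, np.eye(3)))      # True: U has orthonormal columns
```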
If you have enjoyed this thread, share it with your friends and give me a follow!

This is not my typical content: I usually post math explainers here. However, this was my first time trying out Midjourney. Now, I am hooked.

(Don't worry, I won't go into "AI influencer" mode.)

More from @TivadarDanka

Apr 21
The Gram-Schmidt process is one of the most important algorithms in linear algebra.

Its task is simple: orthogonalizing vector sets.
Its applications are endless: matrix decompositions, eigenvalue problems, numerical linear algebra...

This is how it works:
The problem is simple. We are given a set of basis vectors

a₁, a₂, …, aₙ,

and we want to turn them into an orthogonal basis

q₁, q₂, …, qₙ,

such that each qᵢ represents the same information as the corresponding aᵢ.
How do we achieve such a result? One step at a time.

Let’s look at an example! Our input consists of three highly correlated but still linearly independent three-dimensional vectors.
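To make the idea concrete, here is a minimal numpy sketch of the classical Gram-Schmidt procedure, applied to a made-up set of three such vectors (not the ones from the thread's figures). It produces an orthonormal basis: orthogonal vectors of unit length.

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    for i in range(n):
        q = A[:, i].astype(float)
        # Subtract the components along the already-computed q_1, ..., q_{i-1}.
        for j in range(i):
            q -= (Q[:, j] @ A[:, i]) * Q[:, j]
        Q[:, i] = q / np.linalg.norm(q)   # normalize what is left
    return Q

# Three correlated but linearly independent 3D vectors as columns.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.1, 1.0],
              [1.0, 1.0, 1.2]])
Q = gram_schmidt(A)
print(np.round(Q.T @ Q, 6))   # identity matrix: the columns are orthonormal
```

Each new qᵢ is built by removing from aᵢ its components along the previously computed q's, then normalizing the remainder. (In practice, the numerically stabler modified Gram-Schmidt variant is preferred.)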
Read 12 tweets
Apr 12
Here is a probabilistic puzzle.

Feedex and Acme are two delivery companies. Feedex trains are 80% on time, while only 40% of its trucks are.

However, Acme's trains are 100% on time, and 60% of its trucks are as well.

Yet, Feedex is more reliable! Why?
This lesson is brought to you by @brilliantorg's Introduction to Probability course. Their interactive, first-principles approach will make sure you understand and retain the things you learn there.

Since I'm partnering with them, I have a special offer for you later.

Let's go!
Since Acme dominates reliability in both categories, it seems like the better choice.

However, something is missing from the picture. What do you think it is?
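One hint: the overall on-time rate also depends on how much each company delivers by train versus by truck, which the percentages above do not show. A minimal sketch with hypothetical delivery mixes (made-up numbers, chosen only to illustrate the effect) shows how the ranking can flip:

```python
# Hypothetical delivery mixes: (on-time rate, share of deliveries) per vehicle.
feedex = {"train": (0.80, 0.90),
          "truck": (0.40, 0.10)}
acme   = {"train": (1.00, 0.10),
          "truck": (0.60, 0.90)}

def overall_on_time(company):
    # Weighted average of per-vehicle on-time rates by delivery share.
    return sum(rate * share for rate, share in company.values())

print(f"Feedex: {overall_on_time(feedex):.0%}")  # 76%
print(f"Acme:   {overall_on_time(acme):.0%}")    # 64%
```

Per-category rates can favor one company while the overall rate favors the other: the classic Simpson's paradox setup.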
Read 9 tweets
Apr 10
In machine learning, we take gradient descent for granted. We rarely question why it works.

What we are usually told is the mountain-climbing analogy: to find the valley, step in the direction of steepest descent.

But why does this work so well? Read on.
Our journey leads through

• differentiation, as the rate of change,
• the basics of differential equations,
• and equilibrium states.

Buckle up! Deep dive into the beautiful world of dynamical systems incoming. (Full post link at the end.)
First, let's talk about derivatives and their mechanical interpretation!

Suppose that the position of an object at time t is given by the function x(t), and for simplicity, assume that it is moving along a straight line — as a distance-time plot would illustrate.
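As a rough preview of where this is heading (the full post builds it up carefully), here is a minimal toy sketch: gradient descent is Euler's method applied to the gradient flow x'(t) = -f'(x(t)), and the equilibria of that flow are exactly the points where the derivative vanishes, i.e. the candidates for minima.

```python
# Toy example: minimize f(x) = (x - 3)^2 by following the negative derivative.
# Gradient descent is Euler's method applied to the gradient flow x'(t) = -f'(x(t)).
def f_prime(x):
    return 2 * (x - 3)

x, step = 0.0, 0.1              # starting point and step size (learning rate)
for _ in range(50):
    x = x - step * f_prime(x)   # one Euler step of the gradient flow

print(round(x, 4))  # ~3.0: the equilibrium of the flow is the minimizer of f
```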
Read 27 tweets
Mar 29
The single most important "side-effect" of solving linear equation systems: the LU decomposition.

Why? Because in practice, it is the engine behind inverting matrices and computing their determinants.

Here is how it works.
Why is the LU decomposition useful? There are two main applications:

• computing determinants,
• and inverting matrices.

Check out how the LU decomposition simplifies the determinant. (The determinant of a triangular matrix is simply the product of its diagonal entries.)
We’ll demonstrate the technique in the 3 x 3 case.

Let’s go back to square one: where do matrices come from?

For one, systems of linear equations. They are used to model various phenomena ranging from economic processes to biological systems.
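As a preview of the determinant application, here is a minimal sketch with a toy 3 × 3 matrix: a bare-bones Doolittle factorization without pivoting (so it assumes no zero pivots turn up). Since L has ones on its diagonal, det(A) is just the product of U's diagonal.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier that eliminates U[i, k]
            U[i, :] -= L[i, k] * U[k, :]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)

# det(A) = det(L) * det(U) = 1 * (product of U's diagonal).
print(np.prod(np.diag(U)), np.linalg.det(A))  # both 4.0 (up to rounding)
```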
Read 17 tweets
Mar 28
What can go wrong will go wrong.

This is Murphy's famous First Law. Here is a probabilistic reformulation: "what can go wrong with probability 𝑝 > 0, will go wrong with probability 1". But when will it go wrong?

Surprisingly, this is encoded in the probability.
Understanding probabilistic thinking is one of the best investments you can make, and @brilliantorg's Introduction to Probability will teach you the very essence.

Since I'm partnering with them, I have a special offer from them for you later.

Let's get started!
Some events are more important than others.

For instance, if we send Endurance, the rover, to Mars, we want to be confident that failures are unlikely to happen within a given timeframe.

How can probability reveal this?
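One standard way to make this precise, as a minimal sketch with a toy failure probability: if a failure happens independently with probability p in each time period, the chance that it has struck within n periods is 1 - (1 - p)^n, which creeps toward 1, and the expected waiting time until the first failure is 1/p.

```python
# If a failure occurs independently with probability p in each period,
# the chance of at least one failure within n periods is 1 - (1 - p)^n.
p = 0.01           # toy failure probability per period
for n in (10, 100, 1000):
    print(n, round(1 - (1 - p) ** n, 3))   # 0.096, 0.634, ~1.0

# Mean of the geometric distribution: expected periods until the first failure.
print("expected wait:", 1 / p)             # 100.0
```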
Read 12 tweets
