Tivadar Danka
May 9 · 14 tweets · 5 min read
Matrices + the Gram-Schmidt process = magic.

This magic is called the QR decomposition, and it's behind the famous eigenvalue-finding QR algorithm.

Here is how it works.
In essence, the QR decomposition factors an arbitrary matrix into the product of an orthogonal and an upper triangular matrix.

(We’ll illustrate everything with the 3 × 3 case, but everything works the same way in general.)
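Before unpacking the construction, here is the factorization in action. A quick sketch with NumPy (the matrix below is just an illustrative example; `np.linalg.qr` computes the decomposition for us):

```python
import numpy as np

# An arbitrary 3 x 3 matrix with linearly independent columns
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

Q, R = np.linalg.qr(A)  # Q orthogonal, R upper triangular

print(np.allclose(Q @ R, A))            # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q's columns are orthonormal
print(np.allclose(R, np.triu(R)))       # True: R is upper triangular
```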
First, some notation. Every matrix can be thought of as a sequence of column vectors. Trust me, this simple observation is the foundation of many Eureka moments in mathematics.
Why is this useful? Because this way, we can look at matrix multiplication as a linear combination of the columns.

Check out how matrix-vector multiplication looks from this angle. (You can easily work this out by hand if you don’t believe me.)
In other words, a matrix times a vector equals a linear combination of the column vectors.

Similarly, the product of two matrices can be written in terms of linear combinations.
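Both claims are easy to check numerically. A small sketch (the matrices are arbitrary examples): `A @ x` equals the linear combination of A's columns with the entries of x as coefficients, and each column of a product AB is A times the corresponding column of B.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
x = np.array([2.0, -1.0, 3.0])

# A @ x = x₁a₁ + x₂a₂ + x₃a₃, a linear combination of A's columns
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(A @ x, combo))  # True

# Each column of AB is A times the corresponding column of B
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
print(np.allclose((A @ B)[:, 1], A @ B[:, 1]))  # True
```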
So, what’s the magic behind the QR decomposition? Simple: the vectorized version of the Gram-Schmidt process.

In a nutshell, the Gram-Schmidt process takes a linearly independent set of vectors and returns an orthonormal set that progressively generates the same subspaces.
(If you are not familiar with the Gram-Schmidt process, check out my earlier thread, where I explain everything in detail.)

The output vectors of the Gram-Schmidt process (qᵢ) can be written as linear combinations of the input vectors (aᵢ).
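The process itself fits in a few lines. A minimal sketch of classical Gram-Schmidt (the input matrix is an illustrative example; each qᵢ is built from a₁, …, aᵢ only, which is exactly why the coefficients form a triangular pattern):

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    for i in range(n):
        q = A[:, i].copy()
        for j in range(i):  # subtract projections onto the earlier q's
            q -= (Q[:, j] @ A[:, i]) * Q[:, j]
        Q[:, i] = q / np.linalg.norm(q)  # normalize
    return Q

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
```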
In other words, using the column vector form of matrix multiplication, we obtain that A in fact factors into the product of two matrices.
As you can see, one term is formed from the Gram-Schmidt process’ output vectors (qᵢ), while the other one is upper triangular.

However, the matrix of qᵢ-s is also special: as its columns are orthonormal, its inverse is its transpose. Such matrices are called orthogonal.
Thus, any matrix (with linearly independent columns) can be written as the product of an orthogonal and an upper triangular matrix, which is the famous QR decomposition.
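The orthogonality of Q is also what makes R easy to recover: since Q⁻¹ = Qᵀ, we get R = QᵀA, and that product comes out upper triangular. A sketch (NumPy's `qr` uses Householder reflections under the hood rather than Gram-Schmidt, but its output satisfies the same identities):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 1.0]])

Q, _ = np.linalg.qr(A)  # orthonormal columns
R = Q.T @ A             # since Q⁻¹ = Qᵀ, this is Q⁻¹A = R

print(np.allclose(A, Q @ R))         # True: A = QR
print(np.allclose(R, np.triu(R)))    # True: R is upper triangular
```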
Why is this useful? For one, the QR decomposition is used to iteratively find the eigenvalues of matrices. This is the QR algorithm, one of the top 10 algorithms of the 20th century.

computer.org/csdl/magazine/…
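The core loop is surprisingly short. A minimal sketch of the unshifted QR iteration (production implementations add shifts and a Hessenberg reduction; the symmetric test matrix below is an illustrative example):

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q  # similar to A_k, so the eigenvalues are preserved
    # For well-behaved matrices, A_k converges to (upper) triangular form,
    # so the eigenvalues appear on the diagonal.
    return np.diag(Ak)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # symmetric, with distinct eigenvalues

print(np.sort(qr_algorithm(A)))
print(np.sort(np.linalg.eigvalsh(A)))  # the two should agree
```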
This explanation is also a part of my Mathematics of Machine Learning book.

It's for engineers, scientists, and other curious minds. Explaining math like your teachers should have, but probably never did. Check out the early access!

tivadardanka.com/books/mathemat…
If you have enjoyed this thread, share it with your friends and follow me!

I regularly post deep-dive explainers about mathematics and machine learning such as this.


