Tivadar Danka
May 9, 2023 · 14 tweets · 5 min read
Matrices + the Gram-Schmidt process = magic.

This magic is called the QR decomposition, and it's behind the famous eigenvalue-finding QR algorithm.

Here is how it works.
In essence, the QR decomposition factors an arbitrary matrix into the product of an orthogonal and an upper triangular matrix.

(We’ll illustrate everything with the 3 × 3 case, but it all works the same way in general.)
First, some notation. Every matrix can be thought of as a sequence of column vectors. Trust me, this simple observation is the foundation of many a Eureka moment in mathematics.
Why is this useful? Because this way, we can look at matrix multiplication as a linear combination of the columns.

Check out how matrix-vector multiplication looks from this angle. (You can easily work this out by hand if you don’t believe me.)
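
For a 3 × 3 matrix A with columns a₁, a₂, a₃ and a vector x with components x₁, x₂, x₃, it reads like this (a sketch of what the image illustrated):

Ax = x₁a₁ + x₂a₂ + x₃a₃.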
In other words, a matrix times a vector equals a linear combination of the column vectors.

Similarly, the product of two matrices can be written in terms of linear combinations: each column of AB is a linear combination of the columns of A, with the coefficients taken from the corresponding column of B.
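
A quick numerical sanity check of both claims (a minimal NumPy sketch; the example matrices are made up):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])
x = np.array([2., -1., 3.])

# A times x is a linear combination of A's columns, weighted by x's entries
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
assert np.allclose(A @ x, combo)

# Each column of A @ B is a linear combination of A's columns,
# with the corresponding column of B supplying the coefficients
B = np.array([[1., 0., 2.],
              [3., 1., 0.],
              [0., 2., 1.]])
col1 = B[0, 1] * A[:, 0] + B[1, 1] * A[:, 1] + B[2, 1] * A[:, 2]
assert np.allclose((A @ B)[:, 1], col1)
```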
So, what’s the magic behind the QR decomposition? Simple: the vectorized version of the Gram-Schmidt process.

In a nutshell, the Gram-Schmidt process takes a linearly independent set of vectors and returns an orthonormal set whose first k members span the same subspace as the first k input vectors, for every k.
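
Concretely, for three independent input vectors a₁, a₂, a₃, the recipe goes (a quick recap, in my notation):

• q₁ = u₁/‖u₁‖, where u₁ = a₁,
• q₂ = u₂/‖u₂‖, where u₂ = a₂ − (q₁ᵀa₂)q₁,
• q₃ = u₃/‖u₃‖, where u₃ = a₃ − (q₁ᵀa₃)q₁ − (q₂ᵀa₃)q₂.

Each step subtracts the components along the previous q-s, then normalizes what is left.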
(If you are not familiar with the Gram-Schmidt process, check out my earlier thread, where I explain everything in detail.)

The output vectors of the Gram-Schmidt process (qᵢ) can be written as linear combinations of the input vectors (aᵢ), and vice versa.
In other words, using the column vector form of matrix multiplication, we obtain that A in fact factors into the product of two matrices.
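
Spelled out in the 3 × 3 case (my notation for the coefficients; the image showed essentially this), each aⱼ is a combination of q₁, …, qⱼ only:

• a₁ = r₁₁q₁,
• a₂ = r₁₂q₁ + r₂₂q₂,
• a₃ = r₁₃q₁ + r₂₃q₂ + r₃₃q₃, with rᵢⱼ = qᵢᵀaⱼ.

Collecting the qᵢ-s into a matrix Q and the coefficients rᵢⱼ into a matrix R gives A = QR.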
As you can see, one term is formed from the Gram-Schmidt process’ output vectors (qᵢ), while the other one is upper triangular.

The matrix of the qᵢ-s is also special: since its columns are orthonormal, its inverse is its transpose. Such matrices are called orthogonal.
Thus, any matrix can be written as the product of an orthogonal and an upper triangular one, which is the famous QR decomposition.
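
Here is a minimal NumPy sketch of the whole construction (classical Gram-Schmidt on the columns; it assumes the columns are linearly independent and is meant to be illustrative, not numerically robust):

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR decomposition via classical Gram-Schmidt on the columns of A."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for j in range(n):
        u = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # coefficient r_ij = q_i^T a_j
            u -= R[i, j] * Q[:, i]        # remove the component along q_i
        R[j, j] = np.linalg.norm(u)
        Q[:, j] = u / R[j, j]             # normalize what is left
    return Q, R

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])
Q, R = qr_gram_schmidt(A)
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(Q.T @ Q, np.eye(3))    # Q is orthogonal
assert np.allclose(R, np.triu(R))         # R is upper triangular
```

In practice, np.linalg.qr computes the same factorization with a more stable method (Householder reflections).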
When is this useful? For one, it is used to iteratively find the eigenvalues of matrices. This is called the QR algorithm, one of the top 10 algorithms of the 20th century.

computer.org/csdl/magazine/…
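
To give a flavor, here is a bare-bones, unshifted QR iteration (a sketch only; production implementations add shifts and a Hessenberg reduction, and convergence needs suitable assumptions, which a symmetric matrix with distinct eigenvalue magnitudes satisfies):

```python
import numpy as np

def qr_algorithm(A, iterations=200):
    """Unshifted QR iteration: factor, multiply back in reverse order, repeat."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                 # similar to Ak, so the eigenvalues are preserved
    return np.diag(Ak)             # for nice matrices, Ak approaches triangular form

S = np.array([[4., 1., 2.],
              [1., 3., 0.],
              [2., 0., 1.]])       # symmetric, hence real eigenvalues
approx = np.sort(qr_algorithm(S))
exact = np.sort(np.linalg.eigvalsh(S))
assert np.allclose(approx, exact)  # the diagonal converges to the eigenvalues
```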
This explanation is also a part of my Mathematics of Machine Learning book.

It's for engineers, scientists, and other curious minds. Explaining math like your teachers should have, but probably never did. Check out the early access!

tivadardanka.com/books/mathemat…
If you have enjoyed this thread, share it with your friends and follow me!

I regularly post deep-dive explainers about mathematics and machine learning such as this.

