Tivadar Danka
Sep 13, 2023
Neural networks are stunningly powerful.

This is old news: deep learning is state-of-the-art in many fields, like computer vision and natural language processing. (But not everywhere.)

Why are neural networks so effective? I'll explain.
First, let's formulate the classical supervised learning task!

Suppose that we have a dataset D = {(x₁, y₁), …, (xₙ, yₙ)}, where xₖ is a data point and yₖ is the corresponding ground truth.
The task is simply to find a function g(x) for which

• g(xₖ) is approximately yₖ,
• and g(x) is computationally feasible.

To achieve this, we fix a parametrized family of functions. For instance, linear regression uses the family g(x) = a·x + b, with the slope a and the intercept b as parameters.
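To make "fixing a parametric family" concrete, here is a minimal sketch (assuming numpy and a made-up synthetic dataset) of fitting the linear family g(x) = a·x + b by least squares:

```python
import numpy as np

# Hypothetical synthetic dataset: y_k is roughly 2*x_k + 1 plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + 0.05 * rng.standard_normal(20)

# Linear regression's parametric family is g(x) = a*x + b.
# "Finding g" means choosing the parameters (a, b) so that g(x_k) ≈ y_k.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)  # close to the true parameters 2 and 1
```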
If we assume that there exists a true underlying function f(x) that describes the relationship between xₖ and yₖ, the problem can be phrased as a function approximation problem:

"How can we find a function from our parametric family that is as close to f(x) as possible?"
What even is approximation theory? Here's a primer.

Check out the sine function, defined in terms of a right triangle. Except for a few special cases like x = π/4, its exact value is impossible to compute in practice.
Why? Because sine is a transcendental function, meaning that you cannot calculate its value with finite additions and multiplications.

However, when you punch, say, sin(2.123) into a calculator, you'll get an answer. This is done via an approximation: the truncated Taylor series

sin(x) ≈ Σₖ₌₀ⁿ (−1)ᵏ x²ᵏ⁺¹ / (2k+1)!
Here it is in the case of n = 2, which gives the degree-five polynomial x − x³/6 + x⁵/120. It is already a very good approximation, albeit only on the interval [−2, 2].
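As a quick sanity check (numpy assumed), here is the degree-five polynomial compared against numpy's own sine on [−2, 2]:

```python
import numpy as np

def sin_taylor5(x):
    # Degree-five Taylor polynomial of sine around 0: the n = 2 truncation.
    return x - x**3 / 6 + x**5 / 120

xs = np.linspace(-2, 2, 1001)
max_err = np.max(np.abs(np.sin(xs) - sin_taylor5(xs)))
print(max_err)  # about 0.024, attained at the endpoints x = ±2
```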
Let's revisit the problem of supervised learning! Suppose that the function f(x) describes the true relation between data and observation.

f(x) is not known exactly, only for some values xₖ, where f(xₖ) = yₖ.
Our job is to find an approximating function g(x) that

1. fits the data,
2. properly generalizes to unseen samples,
3. and is computationally feasible.

In the language of approximation theory, we want a function g that minimizes the so-called supremum norm

‖f − g‖ = supₓ |f(x) − g(x)|.
The smaller ‖f − g‖ is, the better the fit. Thus, our goal is to make it as small as possible.

You can visualize ‖f − g‖ by plotting the two functions, coloring the area enclosed between their graphs, and measuring the maximum spread of that area along the y-axis.
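In practice, one can only estimate the supremum norm on a finite grid of points. Here is a minimal numpy sketch of that idea (the endpoint values and the example functions are just an illustration):

```python
import numpy as np

def sup_norm(f, g, a, b, n=10001):
    # Grid estimate of ||f - g|| = sup |f(x) - g(x)| over [a, b].
    xs = np.linspace(a, b, n)
    return np.max(np.abs(f(xs) - g(xs)))

# Example: g(x) = x is the degree-one Taylor approximation of sine.
# On [-2, 2] its worst error sits at the endpoints: |sin(2) - 2| ≈ 1.09.
dist = sup_norm(np.sin, lambda x: x, -2, 2)
print(dist)
```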
Mathematically speaking, a neural network with a single hidden layer is defined by the function

g(x) = Σᵢ₌₁ᴺ vᵢ 𝜑(wᵢ · x + bᵢ).

(N is the number of neurons in the hidden layer; 𝜑 is a nonlinear activation such as the sigmoid; the wᵢ-s are weight vectors, while the bᵢ-s and vᵢ-s are real numbers.)
For clarity, here is the graphical representation of a single-layer neural network with four hidden neurons.
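The formula translates to a few lines of code. Below is a sketch (numpy assumed; the parameters are random placeholders, not trained weights) of a single-hidden-layer network with N = 4 neurons:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_layer_net(x, w, b, v):
    # g(x) = sum_i v_i * sigmoid(w_i . x + b_i)
    # w: (N, d) weight vectors, b: (N,) biases, v: (N,) output weights.
    return v @ sigmoid(w @ x + b)

# Hypothetical untrained parameters: N = 4 neurons, 1-dimensional input.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 1))
b = rng.standard_normal(4)
v = rng.standard_normal(4)

print(one_layer_net(np.array([0.5]), w, b, v))  # a single real number
```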
Are single-layer neural networks expressive enough to approximate any reasonable function?

Yes. Meet the universal approximation theorem: any continuous function on a compact set can be approximated arbitrarily well, in the supremum norm, by a single-hidden-layer network with a sigmoidal activation. Let's unpack it.

(Source: Cybenko, G. (1989), "Approximation by Superpositions of a Sigmoidal Function".)
Step one. Fix a small number ε and draw an ε-wide stripe around f(x), the function to learn. The smaller ε is, the better the result will be.
Step two. (The hard part.) Find a neural network that remains inside the stripe.

Is it even possible? Yes: the theorem guarantees its existence. This is why neural networks are called universal approximators.
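The theorem is non-constructive, but the stripe-fitting is easy to see in action. The sketch below (my illustration, not Cybenko's construction) draws random sigmoid neurons, fits only the output weights by least squares, and checks how narrow a stripe the network stays inside on a grid:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical target function to approximate on [-2, 2].
def f(x):
    return np.sin(3 * x)

# N random sigmoid neurons; only the output weights v are fitted.
rng = np.random.default_rng(42)
N = 200
w = rng.uniform(-5, 5, N)
b = rng.uniform(-5, 5, N)

xs = np.linspace(-2, 2, 400)
H = sigmoid(np.outer(xs, w) + b)      # hidden activations, shape (400, N)
v, *_ = np.linalg.lstsq(H, f(xs), rcond=None)

# The network g(x) = H @ v stays within an eps-wide stripe around f.
eps = np.max(np.abs(H @ v - f(xs)))
print(eps)  # the stripe width achieved on the grid
```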
Unfortunately, there are several serious issues.

1. The theorem doesn't tell us how to find such a neural network.
2. The number of neurons can be really high.
3. We can't measure the supremum norm in practice, since f is known only at the data points.
Thus, we can't just rest on our laurels after proving the universal approximation theorem.

The bulk of the work is still ahead of us: finding a good approximation in practice, avoiding overfitting, and much more.

These are going to be our topics for another day :)
Read the full version of the thread here: thepalindrome.org/p/why-are-neur…
If you have enjoyed this thread, share it with your friends and follow me!

I regularly post deep-dive explainers about mathematics and machine learning like this one.
