
One of my favorite convolutional network architectures is the U-Net.

It solves a hard problem in such an elegant way that it became one of the most performant and popular choices for semantic segmentation tasks.

How does it work?

🧵 👇🏽
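Before diving in, here is the core idea in tensor shapes. This is my own minimal NumPy sketch of the skip-connection pattern, not the actual U-Net: a real U-Net uses learned convolutions, while here pooling and repeating merely stand in so we can trace what happens to the feature maps.

```python
import numpy as np

# Toy "U-Net in shapes": the encoder halves the spatial size, the decoder
# doubles it back, and each decoder stage concatenates the matching
# encoder feature map (the skip connection) so fine detail is not lost.

def downsample(x):
    # 2x2 max pooling over a (C, H, W) array
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample(x):
    # nearest-neighbour upsampling by a factor of 2
    return x.repeat(2, axis=1).repeat(2, axis=2)

x = np.random.rand(8, 32, 32)        # (channels, height, width)
skip = x                             # saved for the skip connection
down = downsample(x)                 # -> (8, 16, 16): coarse context
up = upsample(down)                  # -> (8, 32, 32): back to full size
out = np.concatenate([skip, up])     # -> (16, 32, 32): detail + context
print(out.shape)
```

The concatenation is the elegant part: the decoder sees both the coarse, context-rich features and the original fine-grained ones.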


Let's quickly recap what semantic segmentation is: a common computer vision task, where we want to classify which class each pixel belongs to.

Because we want to provide a prediction on a pixel level, this task is much harder than classification.
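To make "prediction on a pixel level" concrete, here is a tiny sketch (the shapes and names are my own, not from the thread): a classifier outputs one label per image, while a segmentation network outputs a map of class scores and we take the per-pixel argmax.

```python
import numpy as np

# Segmentation output: one class score per pixel per class.
# The predicted segmentation mask is the argmax over the class axis.

num_classes, h, w = 3, 4, 4
scores = np.random.rand(num_classes, h, w)   # e.g. network logits

segmentation = scores.argmax(axis=0)         # (H, W) map of class ids
print(segmentation.shape)
```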


Since the absolutely classic paper Fully Convolutional Networks for Semantic Segmentation by Jonathan Long, Evan Shelhamer, and Trevor Darrell, fully convolutional end-to-end encoder-decoder architectures have been the most common choice for this.

(Image source: paper above, arxiv.org/abs/1411.4038v2)


Gradient descent sounds good on paper, but there is a big issue in practice.

For complex functions like training losses for neural networks, calculating the gradient is computationally very expensive.

What makes it possible? For one, stochastic gradient descent!

🧵 👇🏽


When you have a lot of data, calculating the gradient of the loss involves the computation of a large sum.

Think about it: if 𝑥ᵢ denotes the data and 𝑤 denotes the weights, the loss function takes the form below.
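The original image with the formula did not carry over; a standard form (assuming, as is common, that the total loss is a sum of per-sample loss terms ℓ) is:

```latex
L(w) = \sum_{i=1}^{N} \ell(x_i, w),
\qquad
\nabla L(w) = \sum_{i=1}^{N} \nabla_w \ell(x_i, w)
```

One gradient term per data point — that is where the large sum comes from.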


Not only do we have to add a bunch of numbers together, but we also have to find the gradient of each loss term.

For example, if the model contains 10 000 parameters and 1 000 000 data points, we need to compute 10 000 x 1 000 000 = 10¹⁰ derivatives.

This can take a LOT of time.
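This is exactly what stochastic gradient descent sidesteps. A minimal sketch (a least-squares toy problem of my own choosing, not from the thread): instead of touching all N points per step, estimate the gradient from a small random minibatch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares loss L(w) = (1/N) * sum_i (x_i . w - y_i)^2.
# The full gradient touches all N samples; a minibatch gradient is a
# noisy but unbiased estimate that is N / batch_size times cheaper.

N, d = 100_000, 10
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true

def full_gradient(w):
    return 2 * X.T @ (X @ w - y) / N            # sums over all N rows

def minibatch_gradient(w, batch_size=64):
    idx = rng.integers(0, N, size=batch_size)   # random sample of rows
    Xb, yb = X[idx], y[idx]
    return 2 * Xb.T @ (Xb @ w - yb) / batch_size

w = np.zeros(d)
g_full = full_gradient(w)
g_mini = minibatch_gradient(w)
print(g_full.shape, g_mini.shape)
```

The minibatch estimate is noisy, but averaged over many steps the noise washes out — and each step costs a tiny fraction of the full sum.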


Gradient descent has a really simple and intuitive explanation.

The algorithm is easy to understand once you realize that it is basically hill climbing with a really simple strategy.

Let's see how it works!

🧵 👇🏽


For functions of one variable, the gradient is simply the derivative of the function.

The derivative expresses the slope of the function's tangent line, but it can also be viewed as a one-dimensional vector!


When the function is increasing, the derivative is positive. When decreasing, it is negative.

Translating this to the language of vectors, it means that the "gradient" points in the direction of increase!

This is the key to understanding gradient descent.
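Since the derivative points toward increase, stepping *against* it decreases the function — that is the whole algorithm. A few lines of plain Python (the example function is my own):

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is 2 * (x - 3).
# Each step moves opposite the slope, walking downhill toward the minimum.

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of f at the current x
        x -= lr * grad       # step against the direction of increase
    return x

print(gradient_descent(10.0))   # converges to the minimum at x = 3
```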


Sigmoid is one of the most commonly used activation functions.

However, it has a serious weakness: Sigmoids often make the gradient disappear.

This can leave the network stuck during training, effectively stopping it from learning.

How can this happen?

🧵 👇🏽
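You can see the disappearing gradient numerically. The sigmoid's derivative is σ'(x) = σ(x)(1 − σ(x)): it peaks at 0.25 and decays exponentially for large |x|, so a saturated sigmoid passes almost no gradient back through backpropagation.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1 - s)

for x in [0, 2, 5, 10]:
    print(x, sigmoid_grad(x))   # shrinks rapidly as |x| grows
```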


What if you want to optimize a function, but every evaluation costs you $100 and takes a day to execute?

Algorithms like gradient descent build on two key assumptions:

• the function is differentiable,

• and you can calculate it on demand.

What if this is not the case?

🧵 👇🏽


For example, you want to tune the hyperparameters of a model that requires 24 hours of GPU time to train.

Can you find a good enough value within a reasonable time and budget?

One method for this is Bayesian optimization.
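Here is a deliberately crude sketch of the surrogate idea behind it (my own toy, not real Bayesian optimization): fit a cheap model to the few points you have evaluated and let it propose the next point. A proper implementation uses a Gaussian process surrogate and an acquisition function; a quadratic fit stands in for both here.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    # stands in for e.g. "validation loss after 24h of GPU training"
    return (x - 0.7) ** 2 + 0.1 * np.sin(8 * x)

candidates = np.linspace(0, 1, 201)
xs = list(rng.uniform(0, 1, size=3))           # a few initial evaluations
ys = [expensive_objective(x) for x in xs]

for _ in range(5):                             # tiny evaluation budget
    coeffs = np.polyfit(xs, ys, deg=2)         # cheap surrogate model
    surrogate = np.polyval(coeffs, candidates)
    x_next = candidates[surrogate.argmin()]    # surrogate's best guess
    xs.append(x_next)                          # pay for one real evaluation
    ys.append(expensive_objective(x_next))

best = xs[int(np.argmin(ys))]
print(best)
```

The point is the loop structure: each real (expensive) evaluation is chosen by the cheap surrogate, so the budget is spent where it is most likely to pay off.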


Building a good training dataset is harder than you think.

For example, you can have millions of unlabelled data points, but only have the resources to label a thousand.

This is a story about a case that I used to encounter almost every day in my work.

🧵 👇🏽
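One common way to spend such a labeling budget is uncertainty sampling (the thread doesn't name its method, so this is an illustrative assumption): score the unlabelled pool with your current model and send the examples it is least sure about to the annotators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertainty sampling sketch. Random "predicted probabilities" stand in
# for a real model's outputs on the unlabelled pool.

pool_size, budget = 1_000_000, 1_000
probs = rng.uniform(size=pool_size)       # P(class = 1) for each point

# Distance from 0.5 measures confidence; the smallest distances are the
# points the model is least sure about.
uncertainty = -np.abs(probs - 0.5)
to_label = np.argsort(uncertainty)[-budget:]   # 1000 most uncertain points

print(len(to_label))
```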


Do you know how new drugs are developed?

Essentially, thousands of candidate molecules are tested to see if they have the targeted effect. First, the testing is done on cell cultures.

Sometimes, there is no better option than scanning through libraries of molecules.


After cells are treated with a given molecule (or molecules in some cases), the effects are studied by screening them with microscopy.

The treated cells can exhibit hundreds of different phenotypes (= classes), some of which might be very rare.
