Math is not very important when you are using a machine learning method to solve your problem.

Everybody who disagrees should study the 92-page appendix of the Self-Normalizing Networks (SNN) paper before using torch.nn.SELU.

And the core idea of SNNs is actually simple 👇
SNNs use an activation function called the Scaled Exponential Linear Unit (SELU), which is pretty simple to define.

It has the advantage that the activations converge to zero mean and unit variance, which makes it possible to train deeper networks and employ strong regularization.

👇
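For reference, here is what the definition boils down to, sketched in plain PyTorch (the two constants are the fixed values α ≈ 1.6733 and λ ≈ 1.0507 derived in the paper; in practice you would just call torch.nn.SELU):

```python
import torch

# Fixed constants derived in the SNN paper (Klambauer et al., 2017)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805  # usually written as lambda

def selu(x: torch.Tensor) -> torch.Tensor:
    # scale * x                       for x > 0
    # scale * alpha * (exp(x) - 1)    for x <= 0
    return SCALE * torch.where(x > 0, x, ALPHA * torch.expm1(x))
```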
There are implementations both in PyTorch (torch.nn.SELU) and TensorFlow (tf.keras.activations.selu).

You need to be careful to use the correct weight initialization (LeCun normal) and the matching dropout variant (alpha dropout), but this is well documented - see the sketch below.

The code is open-source as well: github.com/bioinf-jku/SNNs

👇
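As a minimal sketch of what that means in PyTorch (the layer sizes and dropout rate here are made up; regular dropout would destroy the zero-mean/unit-variance property, so the paper prescribes alpha dropout and LeCun normal initialization instead):

```python
import torch.nn as nn

class SNNBlock(nn.Module):
    """Linear -> SELU -> AlphaDropout, initialized for self-normalization."""

    def __init__(self, d_in: int, d_out: int, p: float = 0.05):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.SELU()
        # AlphaDropout is designed to preserve zero mean and unit variance
        self.drop = nn.AlphaDropout(p)
        # LeCun normal init: std = 1/sqrt(fan_in), as the SNN paper requires
        nn.init.kaiming_normal_(self.linear.weight, mode="fan_in", nonlinearity="linear")
        nn.init.zeros_(self.linear.bias)

    def forward(self, x):
        return self.drop(self.act(self.linear(x)))
```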
There is complicated math to prove that SNNs have some desirable properties, but you don't need to understand the proof in order to use SNNs on your problem.

Just like you don't need to understand assembly in order to write Python code.

It's a different level of abstraction 👇
And now if you want to read the paper and particularly the legendary appendix, you can do that here:

arxiv.org/pdf/1706.02515…

Or simply follow the Appendix on Twitter @SELUAppendix 😂
There seems to be a related discussion on Hacker News (I just saw it; the tweet wasn't inspired by it). There are some interesting opinions there... 😄
I would argue that domain knowledge is way more important than advanced math! Some basic math skills are certainly needed.
This is a good point about intuition and experimentation.

A good example is belief propagation in conditional random fields (CRFs). You can prove mathematically that it doesn't work in the general case. However, for typical computer vision problems, it works well!


More from @haltakov

Feb 21
This is like an NFT in the physical world

This is a special edition BMW 8 Series painted by the famous artist Jeff Koons. A limited edition of 99 cars with a price of $350K - about $200K more than the regular M850i.

If you think about it, you'll see many similarities with NFTs

👇
Artificially scarce

BMW can surely produce (mint 😅) more than 99 cars with this paint. The collection size is limited artificially in order to make it more exclusive.

Same as most NFT collections - they create artificial scarcity.

👇
Its price comes from the story

The $200K premium for the "paint" is purely motivated by the story around this car - it is exclusive, it is created by a famous artist, it is a BMW Art Car.

It is not faster, more reliable, or more economical. You are paying for the story.

👇
Feb 18
Have you ever wanted to learn how to read ROC curves? 📈🤔

This is something you will encounter a lot when analyzing the performance of machine learning models.

Let me help you understand them 👇

#RepostFriday
What does ROC mean?

ROC stands for Receiver Operating Characteristic, but just forget about it. It is a military term from the 1940s and doesn't make much sense today.

Think about these curves as True Positive Rate vs. False Positive Rate plots.

Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.

This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!

Now the example 👇
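To make this concrete, here is a rough sketch of how the curve is traced out, with made-up labels and scores (in practice you would use something like sklearn.metrics.roc_curve):

```python
import numpy as np

# Made-up ground-truth labels and classifier scores
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.6])

# Each threshold on the score gives one (FPR, TPR) point on the ROC curve
for t in sorted(np.unique(scores), reverse=True):
    pred = scores >= t
    tpr = (pred & (y_true == 1)).sum() / (y_true == 1).sum()  # True Positive Rate
    fpr = (pred & (y_true == 0)).sum() / (y_true == 0).sum()  # False Positive Rate
    print(f"threshold={t:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```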
Feb 17
It sucks if your ML model can't achieve good performance, but it is even worse if you don't know it!

Sometimes you follow all the best practices and your experiments show your model performing very well, but it fails when deployed.

A thread about Sampling Bias 👇
There is a lot of information about rules you need to follow when evaluating your machine learning model:

▪️ Balance your dataset
▪️ Use the right metric
▪️ Use high-quality labels
▪️ Split your training and test data
▪️ Perform cross-validation

But this may not be enough 👇
A common problem when evaluating an ML model is sampling bias.

This means that your dataset contains more samples from some parts of the underlying distribution than from others.

Some examples 👇
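One way to see the effect is a toy simulation (the scenario and the accuracy numbers below are invented purely for illustration): a model that looks great on a test set sampled like the training data can still disappoint when the deployment mix is different.

```python
import numpy as np

rng = np.random.default_rng(42)

def accuracy(p_hard: float, n: int = 100_000) -> float:
    """Overall accuracy when a fraction p_hard of inputs come from a
    'hard' slice of the distribution (e.g. night-time images)."""
    hard = rng.random(n) < p_hard
    # Assume the model is right 95% of the time on easy inputs, 70% on hard ones
    correct = np.where(hard, rng.random(n) < 0.70, rng.random(n) < 0.95)
    return correct.mean()

print(f"Test set, 10% hard samples:   {accuracy(0.10):.3f}")  # looks great
print(f"Deployment, 50% hard samples: {accuracy(0.50):.3f}")  # much worse
```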
Jan 18
The Internet is already decentralized, why do we need web3? 🤔

This is a common critique of web3. However, decentralization on its own is not always enough - sometimes we need to agree on a set of facts.

Blockchains give us a consensus mechanism for that!

Thread 🧵

1/12
The Internet is built of servers that communicate using open protocols like HTTP, SMTP, and WebRTC. Everybody can set up a server and participate. It is decentralized!

However, if two servers distribute contradicting information, how do you know which one is right?

2/12
This is what blockchains give us: a way for decentralized parties to agree on one set of facts. They offer a consensus mechanism!

Imagine the blockchain as a global public database that anybody can read and nobody can falsify - every transaction/change needs to be signed.

3/12
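A toy sketch of the core data structure (this is not how any real chain is implemented, and signatures are omitted): each block commits to the previous one via its hash, so nobody can rewrite history without invalidating everything that follows.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64  # genesis
for tx in ["alice -> bob: 5", "bob -> carol: 2"]:
    block = {"prev": prev, "tx": tx}
    chain.append(block)
    prev = block_hash(block)

# Tampering with an early block changes its hash, which no longer matches
# the 'prev' field stored in the next block - the chain reveals the edit
chain[0]["tx"] = "alice -> bob: 500"
print(block_hash(chain[0]) == chain[1]["prev"])  # False
```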
Jan 18
How decentralized is web3 really?

While there is a lot of hype around web3, NFTs, and decentralized apps (dApps), there is also a lot of criticism. Today, I'll focus on the critique that web3 is actually too centralized.

Let's try to have an honest discussion 👇
These are the main arguments I see regularly. Please add more in the comments.

1️⃣ The Internet is already decentralized
2️⃣ It is inefficient
3️⃣ Everything can be implemented better using a centralized approach
4️⃣ Important services are centralized

👇
I was inspired to write this in part after reading this great article by @moxie pointing out some of the problems with the current state of web3. If you've been living under a rock for the last few weeks, make sure you check it out:

moxie.org/2022/01/07/web…

👇
Jan 17
How many parameters do you need in your neural network to solve any problem? 🤔

GPT-3 has 175 billion, MT-NLG has 530 billion and Wu Dao has 1.75 trillion.

But the truth is you only need 1 parameter. No, not 1 billion. Just a single parameter!

Let me explain 👇
Yes, of course, I'm trolling you, but only a little bit 😁

I want to show you this very cool work by @ranlot75 about how to fit an arbitrary dataset with a single parameter, using the following function

github.com/Ranlot/single-…

👇
Here are examples of some 2D image datasets. You see the parameter alpha and the reconstructed image.

Now, let me give you some high-level intuition about how this works 👇
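A deliberately crude toy version of the trick (the actual repo uses a much cleverer construction, but the underlying point is the same): one real number has room for unlimited digits, so you can pack an entire dataset into it.

```python
# Crude digit-packing version of the idea (the real single-parameter model
# uses a smarter construction; this only conveys the intuition)
data = [3, 1, 4, 1, 5, 9, 2, 6]  # a tiny "dataset" of digits 0..9
n = len(data)

# "Training": encode the whole dataset into one number's digits
alpha = int("".join(map(str, data)))

def predict(i: int) -> int:
    """Recover sample i by reading off the i-th digit of alpha."""
    return (alpha // 10 ** (n - 1 - i)) % 10

assert [predict(i) for i in range(n)] == data  # perfect "fit", one parameter
```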