According to the latest results by Google AI (referenced in the first tweet), this is not the case!
There are quantitative measures that can tell whether two representations 𝑋ᵢ and 𝑋ⱼ are similar. This particular paper uses Centered Kernel Alignment. (CKA paper: arxiv.org/pdf/1905.00414…)
It turns out that the representations 𝑋ᵢ for many consecutive layers are often similar. Visualizing the CKA similarities on a heatmap, a block structure emerges.
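The linear variant of CKA is easy to compute yourself. A minimal NumPy sketch (function names are mine; rows are examples, columns are features):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, n_features)."""
    # Center each feature so the score is invariant to mean shifts.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Yᵀ X||²_F normalized by the self-similarity terms.
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def cka_heatmap(activations):
    """Pairwise CKA over a list of per-layer activations: this matrix
    is what the block-structured heatmaps visualize."""
    n = len(activations)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = linear_cka(activations[i], activations[j])
    return M
```

CKA is invariant to orthogonal transformations and isotropic scaling of the features, which is why it can meaningfully compare layers of different widths.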
This suggests that large contiguous sections of the network add little new: many consecutive layers compute nearly the same representation.
This observation raises several interesting questions, like
• Is the emergence of block structure dependent on the dataset?
• Can redundant layers be removed?
These and many more are answered in the aforementioned paper by Thao Nguyen, Maithra Raghu, and Simon Kornblith.
IMO this is an extremely important research area. Since the size of networks can be absolutely crazy (see GPT-3), reducing them can make a significant impact in the long run, both on the applicability of the technology and on its environmental footprint.
• • •
One of the biggest misconceptions regarding education is that its main purpose is to give knowledge you can immediately use.
It is not.
The best thing education can give you is the mental agility to obtain knowledge at the speed of light.
Let's unpack this idea a bit!
1/7
Consider a course where you build a custom neural network framework with NumPy.
This is hardly usable in practice: nobody deploys models on a homemade library.
However, if you know how they are built, you only need to learn the interface to master an actual framework!
2/7
By understanding how the framework is built and how the underlying algorithms work, you'll be able to do much more: experiment with custom optimizers, implement your own layers, etc.
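The kind of building block such a from-scratch course has you write can be sketched in a few lines. A hypothetical minimal dense layer with its own forward and backward pass (interface and names are my illustration, not a specific course's):

```python
import numpy as np

class Dense:
    """A minimal fully connected layer with a plain SGD update."""

    def __init__(self, in_dim, out_dim, rng=None):
        rng = rng or np.random.default_rng()
        # He-style initialization keeps activations well-scaled.
        self.W = rng.normal(scale=np.sqrt(2.0 / in_dim), size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out, lr=0.01):
        grad_in = grad_out @ self.W.T   # gradient w.r.t. the layer's input
        # SGD step using the gradients w.r.t. the parameters.
        self.W -= lr * self.x.T @ grad_out
        self.b -= lr * grad_out.sum(axis=0)
        return grad_in
```

Once you have written `forward`/`backward` yourself, the `Layer` classes of an actual framework stop being magic: swapping the SGD step for a custom optimizer is just a change to the update lines.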
3/7
Machine learning has enabled scientific breakthroughs in several fields.
Biotechnology is one of the most fascinating, as researchers can now perform mind-blowing tasks with the new tools.
Here are my favorite problems that machine learning helps to solve!
🧵 👇🏽
These are the topics we are going to talk about:
1. Predicting protein structure from amino acid sequences.
2. Accelerating high-throughput screening for drug discovery.
3. Mapping out the human cell atlas.
4. Precision medicine.
Let's dive in!
1. Predicting protein structure from amino acid sequences.
Proteins are the workhorses of biology. In our body, myriads of processes are controlled by proteins. They enable life. Yet compared to their importance, we know so little about them!
In the last 24 hours, more than 400 of you decided to follow me. Thank you, I am honored!
As you probably know, I love explaining complex machine learning concepts simply. I have collected some of my past threads so you don't miss out on them.