Yes, I also thought about that. What they do in the paper is essentially engineer a feature that does this, so it should definitely be possible for the discriminant to find it as well.
ROC curves plot the True Positive Rate (also known as Recall or Sensitivity) against the False Positive Rate. On an imbalanced dataset, the huge number of negatives keeps the FPR low, so the ROC curve can look good even if your classifier does poorly on the underrepresented class.
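Here is a minimal sketch of that effect (scikit-learn, with a synthetic ~1% positive dataset and a logistic regression model - both illustrative choices of mine, not from the original post): ROC-AUC can look strong while precision-recall metrics are much more sobering.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, recall_score

# Synthetic rare-event problem: roughly 1% positives
X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

print("ROC-AUC:          ", roc_auc_score(y_test, scores))            # often looks great
print("Average precision:", average_precision_score(y_test, scores))  # usually much lower
print("Recall @ 0.5:     ", recall_score(y_test, clf.predict(X_test)))  # positives actually caught
```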
Here are some insights I found particularly interesting 👇
"Neural networks are parallel computers"
That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural networks are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"
Humans don't think much when listening, observing or performing simple tasks.
This means that a neural network can be trained to be good at these tasks as well: NLP, computer vision and reinforcement learning.
My setup for recording videos for my machine learning course 🎥
A lot of people asked about my setup the other day, so here is a short thread on that. It's nothing fancy, but it does a good job 🤷‍♂️
Details 👇
Hardware ⚙️
▪️ MacBook Pro (2015 model) - screen sharing and recording
▪️ iPhone XS - using the back camera for video recording
▪️ Omnidirectional external mic - connected to the iPhone
▪️ Highly professional camera rig - books mostly about cooking and travel 📚
Software 💻
▪️ OBS Studio - recording of the screen and the camera image
▪️ EpocCam - use your iPhone as a webcam. You can connect your iPhone over either WiFi or a cable.
▪️ Google Slides - for presentations
▪️ Jupyter notebooks and Google Colab - for experimenting with code
Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light from a self-driving car's camera. We train a model that achieves 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is that the data is severely imbalanced - the ratio between background pixels and traffic light pixels is about 800:1.
If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
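Here is a minimal sketch of that trap (NumPy only; the pixel counts are hypothetical numbers I picked to mirror the 800:1 ratio above) - a "model" that labels every pixel as background:

```python
import numpy as np

n_background, n_traffic_light = 800_000, 1_000    # roughly 800:1
y_true = np.concatenate([np.zeros(n_background), np.ones(n_traffic_light)])
y_pred = np.zeros_like(y_true)                    # predict "background" for every pixel

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true == 1].mean()               # fraction of traffic-light pixels found

print(f"Accuracy: {accuracy:.2%}")  # ~99.88%
print(f"Recall:   {recall:.2%}")    # 0.00% - it never finds a traffic light
```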
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data (a short code sketch follows the list):
▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss
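As a quick illustration, here is a minimal sketch of two of these remedies - oversampling the minority class and adapting the loss via class weights. The dataset, the logistic regression model, and all numbers are illustrative assumptions of mine, not part of the original thread.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Synthetic imbalanced dataset: ~1% positives
X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01], random_state=0)

# 1) Oversampling: repeat minority-class samples until the classes are balanced
X_min, X_maj = X[y == 1], X[y == 0]
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([np.zeros(len(X_maj)), np.ones(len(X_min_up))])
clf_oversampled = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

# 2) Adapting the loss: weight errors on the rare class more heavily
clf_weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
```

Undersampling works the same way in the opposite direction: draw a random subset of the majority class (e.g. with `resample(..., replace=False)`) so the classes end up roughly equal in size.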