One of the questions that really got me thinking yesterday (they all did, to some degree) was from @Jeande_d: how to balance learning foundational, transferable skills vs focusing on specific tools.
@Jeande_d My reasoning was that one should try hard not to invest too much in any one tool, because every tool eventually disappears. But tools are crucial for productivity, so one should still learn enough to really take advantage of a tool's unique features.
@Jeande_d One way I think you can hit that sweet spot is to practice some sort of dropout regularization on your usual tool set.
In every new project, swap one of your usual tools for some convenient alternative. It will make you a bit less productive, to be sure...
@Jeande_d ... and sometimes you really cannot afford that loss in productivity.
But try to force yourself to do it as often as possible. If I found out that I couldn't really stop using, say, React, I would be very worried about my future self.
@Jeande_d Right now, that tool for me is probably scikit-learn. There are some things that I cannot see myself doing without it, and that worries me. So next time I have to do some shallow machine learning, I want to use an alternative.
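@Jeande_d To make it concrete, here's a minimal sketch of the kind of "shallow ML" workflow I mean; the dataset and estimator are placeholders I picked for illustration, not a recommendation:

```python
# A minimal sketch of the kind of scikit-learn workflow I'd have to replace.
# Dataset and model choice are placeholders, not recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline abstraction is one of those features I can't see myself living without.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```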
Any suggestions?
❓ Today, I want to start discussing the different flavors of Machine Learning we can find.
This is a very high-level overview. In later threads, we'll dive deeper into each paradigm... 👇🧵
Last time we talked about how Machine Learning works.
Basically, it's about having some source of experience E for solving a given task T, which allows us to find a program P that is (hopefully) optimal w.r.t. some metric M.
According to the nature of that experience, we can define different formulations, or flavors, of the learning process.
A useful distinction is whether we have an explicit goal or desired output, which gives rise to the definitions of 1️⃣ Supervised and 2️⃣ Unsupervised Learning 👇
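As a rough illustration (my own sketch, with a placeholder dataset), the very same data can feed either flavor, depending on whether the experience E includes a desired output:

```python
# A rough sketch contrasting the two flavors; the dataset is a placeholder.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# 1️⃣ Supervised: the experience E includes the desired output y.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2️⃣ Unsupervised: the experience E is just X; no target is given,
# so the algorithm has to find structure (here, clusters) on its own.
km = KMeans(n_clusters=3, n_init=10).fit(X)
```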
A big problem with social and political sciences is that they *look* so intuitive and approachable that literally everyone has an opinion.
If I say "this is how quantum entanglement works" almost no one will dare to reply.
But if I say "this is how content moderation works"...
And the thing is, there is a huge amount of actual, solid science on almost any socially relevant topic, and most of us are as uninformed about it as we are about any dark corner of particle physics.
We just believe we can have an opinion, because the topic seems less objective.
So we are showing huge disrespect to social scientists, who have to deal every day with the false notion that what they have been researching for years is something anyone can weigh in on after thinking about it for maybe five minutes. This is, of course, nonsense.
I've been a vocal opponent of the "neural networks are brain simulations" analogy, not because it's *wrong* but because I believe it's harmful for beginners.
I want to propose an alternative analogy for approaching deep learning from a dev background.
👇
Think about detecting a face in an image.
How would you even start to write a program for that?
You know it's gonna have something to do with finding a "nose" and two "eyes", but how can you go from an array of pixels to something that looks like an eye, in whatever position?
Now, suppose you have access to thousands of faces and non-faces.
How does that change the problem?
Instead of thinking in the problem domain (finding faces), you can now take a leap upwards in abstraction and think in the meta-problem domain (finding face finders).
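In code, that leap looks something like this (a toy sketch with made-up random data standing in for real images, nowhere near a real face detector):

```python
# Toy sketch of the "meta-problem" leap: instead of hand-coding rules
# for what an eye or a nose looks like, we learn a classifier from
# labeled examples. The data here is random, standing in for real images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a flattened 32x32 grayscale image...
X = rng.random((2000, 32 * 32))
# ...and each label says whether that image contains a face.
y = rng.integers(0, 2, size=2000)

# The "face finder" is now the *output* of a program (the learner),
# not something we wrote by hand, pixel rule by pixel rule.
face_finder = LogisticRegression(max_iter=1000).fit(X, y)
predictions = face_finder.predict(X[:5])
```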
A day to share with you amazing things from every corner of Computer Science.
Today I want to talk about Generative Adversarial Networks 👇
🍬 But let's begin with some eye candy.
Take a look at this mind-blowing 2-minute video and, if you like it, read on; I'll tell you a couple of things about it...
Generative Adversarial Networks (GANs) have taken the machine learning world by surprise with their uncanny ability to generate hyper-realistic examples of human faces, cars, landscapes, and a lot of other stuff, as you just saw.
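To give you a flavor of the adversarial game at the core of it, here's a bare-bones sketch in PyTorch; the dimensions and data are toy placeholders, nothing like a real face generator:

```python
# Bare-bones sketch of the adversarial game: a generator G learns to
# produce samples, a discriminator D learns to tell them from real data.
# All shapes and data here are toy placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0  # stand-in for real data

for step in range(200):
    # Train D: tell real samples apart from G's fakes.
    fake = G(torch.randn(64, 16)).detach()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train G: fool D into labeling fakes as real.
    fake = G(torch.randn(64, 16))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```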