The single best way to get into machine learning is to build something with it.
Here is an extensive list of hands-on projects that you can start right now. Take inspiration, learn tools, and find the topics you are passionate about.
Read on and go create something awesome. ↓
I am grouping the projects into the following categories.
These hands-on projects work best when you
• follow along and do the coding yourself,
• understand why and how things work,
• and try to take what you built to the next level.
Most of these projects only scratch the surface, but they are excellent for getting your feet wet and inspiring you to take them further.
The real work begins when you start building on your own.
PhD school is difficult, but not because of the research.
Beyond the research itself, there are several key choices whose importance students tend to underestimate. Most of them have nothing to do with your hard skills.
Here are the most impactful ones. ↓
1. Picking your advisor.
Young researchers usually value fame and prestige over personal relationships. However, your advisor and your fellow labmates will determine your everyday work environment.
Don't sacrifice this for some scientific pedigree.
A healthy relationship with your advisor is essential for your professional performance. Pick someone who is not only a good scientist but a good person as well. Avoid abusive personalities.
Interview students and lab alumni about your prospective advisor if you can.
There is one big reason we love the logarithm function in machine learning.
Logarithms help us reduce complexity by turning multiplication into addition. You might not know it, but they are behind a lot of things in machine learning.
Here is the entire story.
🧵 👇🏽
First, let's start with the definition of the logarithm.
The base 𝑎 logarithm of 𝑏 is simply the solution of the equation 𝑎ˣ = 𝑏.
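To make the definition concrete, a quick worked instance:

\[
\log_2 8 = 3, \quad \text{since} \quad 2^3 = 8.
\]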
Despite its simplicity, it has many useful properties that we take advantage of all the time.
You can think of the logarithm as the inverse of exponentiation.
Because of this, it turns multiplication into addition. Exponentiation does the opposite: it turns addition into multiplication.
(The base is often assumed to be a fixed constant, so it is frequently omitted from the notation.)
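A minimal numerical sketch of these two identities, assuming Python with NumPy (the specific numbers are arbitrary, chosen only for illustration):

```python
import numpy as np

x, y = 3.7, 12.5  # arbitrary positive numbers

# The logarithm turns multiplication into addition:
# log(x * y) == log(x) + log(y)
print(np.isclose(np.log(x * y), np.log(x) + np.log(y)))  # True

# Exponentiation does the opposite, turning addition into multiplication:
# exp(x + y) == exp(x) * exp(y)
print(np.isclose(np.exp(x + y), np.exp(x) * np.exp(y)))  # True
```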
🤔 Should you learn mathematics for machine learning?
Let's do a thought experiment! Imagine moving to a new country without speaking the language and knowing the way of life. However, you have a smartphone and a reliable internet connection.
How do you start exploring?
1/8
With Google Maps and a credit card, you can do many awesome things there: explore the city, eat in nice restaurants, have a good time.
You can do the grocery shopping every day without speaking a word: just put the items in your basket and swipe your card at the checkout.
2/8
After a few months, you'll start to pick up some language as well—simple things, like saying greetings or introducing yourself. You are off to a good start!
There are built-in solutions for common tasks that just work. Food ordering services, public transportation, etc.
3/8
Matrices are the basic building blocks of learning algorithms.
Multiplying data vectors by a matrix is equivalent to transforming the feature space. We often treat this as a "black box", but there is a lot to discover.
For one, how the transformation changes the volume of objects.
This is described by the determinant of the matrix, which captures
• how much the transformation scales volumes,
• and whether it flips the orientation of the basis vectors.
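As a small illustration (a sketch in Python with NumPy, using arbitrary example matrices): the absolute value of the determinant is the factor by which the transformation scales areas, and a negative sign signals a flipped orientation.

```python
import numpy as np

# An arbitrary 2x2 transformation matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The unit square has area 1; its image under A has area |det(A)|.
print(np.linalg.det(A))  # ≈ 6.0, so areas are scaled by a factor of 6

# A reflection that swaps the two axes: it preserves area
# but reverses orientation, so the determinant is negative.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.det(B))  # -1.0
```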
The determinant is given by the formula below. I am a mathematician, and even I find this intimidating.
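For reference, this is presumably the Leibniz expansion of the determinant of an n × n matrix A = (aᵢⱼ):

\[
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},
\]

where the sum runs over all permutations σ of the indices {1, 2, …, n}, and sgn(σ) is +1 or −1 depending on whether σ is an even or odd permutation.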