Every aspiring data scientist I talk to is overwhelmed by the colossal number of online courses to choose from 🤯
My solution to this problem ↓
Learning is about connecting the dots.
However, it feels like there are too many dots to connect when learning data science.
Too many courses...
Too many blog posts...
Too many technologies...
Solution: You need to change the way you learn.
As a professional data scientist, you are expected to be a problem-solver for the company or institution you work for.
You need to be good at building data science products that solve business problems.
And for that, you don't need to be an expert in Python, for example.
Instead, you need to know *enough Python* to build your solution.
And to know how much Python is *enough* you must start your learning from the end goal.
Do not learn, and then start.
Start, and then learn.
Pick a project you are interested in.
For example, if you are into computer vision, you can set yourself this goal:
"I want to build a REST API that does face recognition".
Starting from the end goal puts your mind in the "problem-solving" mode.
So you start asking yourself the right questions.
And you start googling.
#question 1: "Is there a public dataset with human faces I can use to build my model?"
And you happen to find a blog post with a list of relevant datasets of human faces. You investigate a bit further and find the one that seems the best for your use case.
Boom.
#question 2: "What ML model is good for face recognition?"
And you find 5 tutorials on YouTube that cover the model-building phase.
2 minutes later you pick the one that uses PyTorch, because you are more interested in PyTorch than TensorFlow.
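To give you an idea of what that step can look like, here is a minimal sketch (not the tutorial's code): fine-tuning a pretrained ResNet from torchvision on a folder of labelled face images. The data path and hyperparameters are placeholders, and a production face recognition system would more likely use an embedding model.

```python
# A minimal sketch: fine-tune a pretrained ResNet-18 on a folder of face
# images (one subfolder per person). Path and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data/faces", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # just enough training to illustrate the loop
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "face_model.pt")
```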
#question 3: "How do I build a REST API in Python?"
And you discover Flask, an easy-to-use Python library that is designed just for that.
You find a quick-start tutorial that shares a basic skeleton for a REST API in Flask.
Boom.
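That skeleton can be as small as this (a minimal sketch; the /recognize endpoint name and the model call are my placeholders, not the tutorial's code):

```python
# A minimal Flask skeleton for the face recognition API.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/recognize", methods=["POST"])
def recognize():
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    image_bytes = request.files["image"].read()
    # placeholder: run your trained PyTorch model on image_bytes here
    prediction = "unknown"
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```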
#question 4: "How do I make my API accessible to anyone on the Internet?"
And you discover a few of the most popular ways to deploy ML APIs, from beginner-friendly options like Streamlit to more advanced solutions like AWS API Gateway.
So you pick Heroku, a good middle ground.
To build this project, you will need *just enough Python* to:
✅ train your ML model
✅ build a REST API
And you don't need to become a Python expert to do these things.
When you learn by building projects, you get 2 things:
1 → Become a problem-solver. And this is what separates a senior from a junior data scientist.
2 → Build a portfolio. Every project you make is another valuable asset you will have when looking for jobs.
In conclusion,
✅ You learn by solving a specific problem.
✅ When you start from the end goal, you *think*, and you learn just enough of each technology to build your solution.
✅ As a bonus, you build a portfolio that will greatly help you when looking for jobs.
Job postings for entry-level data scientists are nonsense.
Don't try to fit all their requirements.
This is what you need to do instead ↓↓↓
Do not try to tick all the boxes in these long job postings.
Because you will go crazy.
And because it is a lie that you need to rock at Python, SQL, ETL design, data visualization, Deep Learning, and Metaphysics to land an entry-level job in data science.
If so, why are companies asking for all these things?
Well, because most of them do not have a clue about data science, so they Copy+Paste the job descriptions they see in top tech companies.
Fear of missing out (FOMO) pushes normal companies to ask for things they do not even need.
Here are 2 steps that every real-world ML problem has...
... that you won't learn on Kaggle ↓↓↓
➡️ From business problem to ML problem
Every Kaggle competition starts with a clearly defined target metric you need to optimize for.
But, in real-world ML, there is no target metric waiting for you.
It is your job to translate a business problem into an ML problem, by finding the right proxy metric.
This proxy metric is a quantitative, more abstract metric (e.g. accuracy or precision) that positively correlates with the actual business metric you want to impact.
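A made-up example to make this concrete: say the business metric is money lost to fraud, and the proxy metrics you actually optimize are the precision and recall of a fraud classifier (the labels and predictions below are invented):

```python
# Hypothetical fraud-detection example: the business metric is money lost to
# fraud; the proxy metrics are precision and recall of the classifier.
from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 0, 1, 1, 0, 0, 1]   # made-up labels: 1 = fraudulent transaction
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]   # made-up model predictions

print("precision:", precision_score(y_true, y_pred))  # how many flagged cases were real fraud
print("recall:", recall_score(y_true, y_pred))        # how much fraud was actually caught
```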
All ML systems can be decomposed into 3 pipelines (aka programs):
✅ Feature pipeline
✅ Training pipeline
✅ Inference pipeline
And this is how they work ↓
The feature pipeline takes raw data from
- a data warehouse,
- an external API, or
- a website, through scraping,
and generates features, aka the inputs for your ML model, and stores them in a Feature Store so that the other 2 pipelines can use them later.
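In code, a feature pipeline can be as small as this sketch (file names, features and the fraud label are made up; a real setup would write to an actual feature store such as Feast or Hopsworks rather than a local parquet file):

```python
# A minimal, hypothetical feature pipeline: raw data in, features out.
from pathlib import Path
import pandas as pd

# 1. take raw data (a local CSV standing in for a warehouse / API / scraper)
raw = pd.read_csv("raw_transactions.csv", parse_dates=["timestamp"])

# 2. generate features (plus the label, stored alongside for the training pipeline)
features = pd.DataFrame({
    "user_id": raw["user_id"],
    "amount": raw["amount"],
    "hour_of_day": raw["timestamp"].dt.hour,
    "is_weekend": raw["timestamp"].dt.dayofweek >= 5,
    "is_fraud": raw["is_fraud"],
})

# 3. store them so the training and inference pipelines can reuse them
Path("feature_store").mkdir(exist_ok=True)
features.to_parquet("feature_store/transactions_features.parquet", index=False)
```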
The training pipeline takes the features from the store and outputs a trained ML model.
These are (in general) the best models for each domain:
- Tabular data → XGBoost
- Computer Vision → Fine-tune a Convolutional Neural Net
- NLP → Fine-tune a Transformer net.
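And here is a minimal training pipeline for the tabular case, picking up the (hypothetical) features from the sketch above and training an XGBoost model:

```python
# A minimal, hypothetical training pipeline: features in, trained model out.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_parquet("feature_store/transactions_features.parquet")
X = df.drop(columns=["user_id", "is_fraud"])
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
model.save_model("model.json")  # the inference pipeline loads this later
```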