25 True|False machine learning questions that are horrible for interviews but pretty fun to answer.
Most importantly: they will make you think and will keep your knowledge sharp.
These are mostly beginner-friendly.
1. A "categorical feature" is a feature that can only take a limited number of possible values.
2. Precision is a performance metric that defines a classification model's ability to identify only relevant samples.
3. Recall is a performance metric that defines a classification model's ability to identify all relevant samples.
4. One-hot encoding is an excellent solution to transform categorical features with high cardinality.
5. The F1 Score is a metric defined as the harmonic mean of precision, recall, and accuracy.
6. As the number of hidden layers increases in a neural network, its capacity also increases.
7. Initializing a neural network's weights with zero is our best bet to allow the network to converge.
8. As the dropout ratio used in a neural network increases, the network's capacity also increases.
9. Stochastic Gradient Descent is the proper technique to use when the full dataset doesn't fit in memory.
10. Batch Normalization is an efficient backpropagation technique that allows neural networks to learn.
11. Gradient Descent is an algorithm used to minimize overfitting in neural networks.
12. Having high bias means the model is too simple and can't capture many features during the training phase. This is also known as overfitting.
13. Softmax is an activation function that always returns the input value if it's positive and zeroes otherwise.
14. In neural networks, an activation function's most essential role is to decide whether a unit (or neuron) should fire.
15. Using a learning rate that's too low will cause the training process to be very slow.
16. ReLU (or Rectified Linear Unit) is an activation function that, given an input vector, generates an output where the sum of the values in the vector is equal to one.
17. Your model's accuracy can't be used as a loss function to train a neural network.
18. An autoencoder is a neural network that automatically learns by using its inputs as the expected output.
19. Convolutional neural networks are translation invariant.
20. Autoencoders are one of the most popular supervised learning methods.
21. The type of problem where you need to classify the input into a single class is called "multi-class classification."
22. Dropout is an excellent regularization technique that speeds the training process.
23. A neural network with a single layer is capable of approximating any function.
24. Leaky ReLU is an activation function that lets small negative values pass through when the input is less than zero.
25. Bagging is the concept of splitting a dataset and randomly placing it into bags for training a model.
If you want to check your answers, take a look at this picture. Every row contains 5 of the answers, with 1 = True and 0 = False.
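Several of the metric questions above (precision, recall, F1) are easy to sanity-check in code. Here's a minimal pure-Python sketch; the function name and the toy labels are made up for illustration:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = relevant)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)  # of everything we flagged, how much was relevant
    recall = tp / (tp + fn)     # of everything relevant, how much did we flag
    # F1 is the harmonic mean of precision and recall (accuracy is not involved)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example: 3 relevant samples, the model finds 2 of them plus 1 false alarm
p, r, f1 = precision_recall_f1([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```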
If you found this helpful, follow me, and let's keep spreading machine learning content all over Twitter!
• • •
When we start with machine learning, we learn to split our datasets into training and testing sets by taking a percentage of the data.
Unfortunately, this practice could lead to overestimating the performance of your model.
↓ 1/7
Imagine a dataset of pictures of people making signals with their hands.
As we were told, we take 70% of the images for training and the remaining 30% for testing. We are careful to maintain the original ratio between classes.
How could this be a problem?
↓ 2/7
The dataset contains a lot of pictures of Mary, showing different signals with her hands.
The same goes for Joe. He was also one of the models who participated in creating the dataset.
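The problem this sets up is that a random picture-level split puts photos of the same person in both training and testing, so the model can score well by recognizing the person rather than the signal. One way to avoid that is to split by person instead of by picture. A minimal pure-Python sketch (the function name and toy data are hypothetical; scikit-learn offers similar group-aware splitters):

```python
import random

def group_split(samples, group_key, test_frac=0.3, seed=42):
    """Split samples so no group (e.g. a person) appears in both sets."""
    groups = sorted({group_key(s) for s in samples})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_frac))
    test_groups = set(groups[:n_test])
    train = [s for s in samples if group_key(s) not in test_groups]
    test = [s for s in samples if group_key(s) in test_groups]
    return train, test

# Hypothetical dataset: (person, image_id) pairs
data = ([("mary", i) for i in range(6)]
        + [("joe", i) for i in range(4)]
        + [("ana", i) for i in range(5)])
train, test = group_split(data, group_key=lambda s: s[0])
# No person's pictures end up on both sides of the split
```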
Coming soon, in Python 🐍 3.10: "Pattern Matching."
Looks sick!
No, this is not a switch statement. Pattern matching is very different.
With patterns, you get a small language to describe the structure of the values you want to match. For example, you can match an individual element of a tuple.
You can use patterns to match even more complex structures. You can nest them. You can have redundancy checking.
Pattern matching is a feature you can find in functional languages.
It's excellent that Python decided to add it! I'm really excited about it.