For a deeper dive, it expects you to have a reasonable understanding of calculus, linear algebra, probability, and statistics.
The book is organized into two parts:
▫️ The Fundamentals of Machine Learning
▫️ Neural Networks and Deep Learning
Here is the outline of what's covered: 👇
Part I - The Fundamentals of Machine Learning
▫️ What is machine learning?
▫️ Steps in a machine learning project
▫️ Learning by fitting a model to data
▫️ Optimizing a cost function
▫️ Handling, cleaning and preparing data
▫️ Selecting and engineering features
(...)
(...)
▫️ Selecting a model
▫️ Tuning hyperparameters
▫️ Cross-validation
▫️ Challenges with machine learning
▫️ Common learning algorithms
▫️ Curse of dimensionality
▫️ Unsupervised learning techniques
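To give a flavor of two of the Part I topics above, "learning by fitting a model to data" and "optimizing a cost function", here is a minimal sketch (mine, not the book's code) of gradient descent on simple linear regression, using only the standard library:

```python
# Fit y = w*x + b by minimizing the mean squared error cost,
# the classic "optimize a cost function" setup from Part I.
# This is an illustrative sketch, not code from the book.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Gradient descent on MSE: (1/n) * sum((w*x + b - y)^2)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the cost w.r.t. w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The book covers the same idea with scikit-learn and much richer tooling; this just shows the bare mechanics.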
Part II - Neural Networks and Deep Learning
▫️ What is a neural net?
▫️ Building and training neural nets
▫️ TensorFlow and Keras
▫️ Feedforward neural nets
▫️ Convolutional networks
▫️ Recurrent networks
▫️ LSTM
▫️ Encoder/decoder architectures
▫️ Transformers
(...)
(...)
▫️ Autoencoders
▫️ Generative adversarial networks
▫️ Training techniques
▫️ Reinforcement learning
▫️ Loading and preprocessing large amounts of data
▫️ Training and deploying models at scale
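For the "what is a neural net?" question at the top of Part II, here is a tiny illustrative sketch (mine, not the book's, which uses TensorFlow and Keras) of what a feedforward net actually computes, a nonlinear activation applied to a weighted sum, layer by layer:

```python
import math

def relu(z):
    # Rectified linear unit: the most common hidden-layer activation.
    return max(0.0, z)

def layer(inputs, weights, biases, activation):
    # One dense layer: activation(W @ x + b), written out by hand.
    return [
        activation(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# A 2-input -> 2-hidden -> 1-output network with made-up fixed weights.
x = [1.0, 2.0]
hidden = layer(x, [[0.5, -0.25], [1.0, 1.0]], [0.0, -1.0], relu)
sigmoid = lambda z: 1 / (1 + math.exp(-z))
output = layer(hidden, [[1.0, 0.5]], [0.0], sigmoid)
print(output)  # → [0.7310585786300049]
```

Training consists of adjusting those weights and biases to reduce a cost function; Keras automates all of this, which is exactly what the Part II chapters walk through.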
When I bought the book, I felt intimidated by it: it's thick and heavy!
At the time, almost every machine learning book I had was hard to read, full of formulas and lingo.
This book is different.
This book is for you and me.
Now, I open it every week.
I'm either looking for something specific, or I simply read a couple of pages.
It never ceases to amaze me how much knowledge it packs!
Thanks, @aureliengeron! I really love what you did here!