Aurélien Geron
Author of the book Hands-On #MachineLearning with #ScikitLearn, #Keras and #TensorFlow. Former PM of #YouTube video classification. Founder of a telco operator.
Feb 14, 2022 9 tweets 2 min read
1/ Here's a philosophical question for you: what's the level of elephantness of a tree?

Elephants are highly complex; they have all sorts of properties, so the question is meaningless unless I specify exactly which properties I'm referring to, and how to score them.

2/ So let me be specific:

elephantness = weight × trunk length

Sounds reasonable. And... Oh wow, it turns out that trees are super-elephants. And cars are pretty elephanty too. Who would have guessed?
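To make the arithmetic of the thought experiment concrete, here is a minimal Python sketch with made-up, purely hypothetical figures (none of these numbers come from the thread), plugging each object into the elephantness = weight × trunk length score:

```python
# Purely hypothetical figures, just to illustrate how an arbitrary metric behaves.
# elephantness = weight (kg) × trunk length (m)
objects = {
    "elephant": {"weight_kg": 5_000,  "trunk_length_m": 2.0},   # adult elephant
    "oak tree": {"weight_kg": 10_000, "trunk_length_m": 15.0},  # a tree "trunk"
    "car":      {"weight_kg": 1_500,  "trunk_length_m": 1.0},   # a car "trunk" (boot)
}

for name, props in objects.items():
    score = props["weight_kg"] * props["trunk_length_m"]
    print(f"{name:>8}: elephantness = {score:,.0f}")

# The tree scores far above the elephant: the metric is only as meaningful
# as the properties you chose and how you decided to combine them.
```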
Oct 12, 2021 15 tweets 4 min read
My favorite Python 3.5 feature: the matrix multiplication operator @
👇Python features thread👇

Other Python 3.5 features I often use:
- subprocess.run()
- math.isclose()

Big 3.5 features I don't really use much:
- Coroutines with async and await
- Type hints
👇
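A minimal sketch of the three features mentioned above (the @ operator via NumPy arrays, subprocess.run(), and math.isclose()), using only calls available since Python 3.5:

```python
import math
import subprocess

import numpy as np

# Matrix multiplication operator (PEP 465): A @ B instead of np.dot(A, B).
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5.], [6.]])
print(A @ B)  # same result as np.dot(A, B)

# subprocess.run(): one call that runs the command, waits, and returns the result.
result = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout.strip())

# math.isclose(): tolerance-aware float comparison instead of ==.
print(math.isclose(0.1 + 0.2, 0.3))  # True, unlike (0.1 + 0.2 == 0.3)
```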
Mar 27, 2019 5 tweets 2 min read
Sometimes validation loss < training loss. Ever wondered why? 1/5

The most common reason is regularization (e.g., dropout), since it applies during training, but not during validation & testing. If we add the regularization loss to the validation loss, things look much different. 2/5
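As a rough illustration of that point (not code from the thread), here is a minimal Keras sketch: the Dropout layer perturbs the loss reported during training but is disabled when the validation loss is computed, so val_loss can come out lower than loss:

```python
import numpy as np
from tensorflow import keras

# Toy regression data, just to have something to fit.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (2.0 * X[:, :1] + rng.normal(scale=0.1, size=(1000, 1))).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),  # active during training, disabled during evaluation
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

history = model.fit(X, y, validation_split=0.2, epochs=5, verbose=0)

# The training loss includes the noise injected by dropout, while the
# validation loss is computed with dropout turned off, so it can be lower.
print("final training loss:  ", history.history["loss"][-1])
print("final validation loss:", history.history["val_loss"][-1])
```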
Mar 24, 2018 8 tweets 2 min read
If you are confused about likelihood functions and maximum likelihood estimation, this little diagram might help. 1/8

Consider a probabilistic model f(x; θ) (top left). If you set the model parameter θ (top left, black horizontal line), you get a probability distribution over x (lower left). In this case, x is a continuous variable, so we get a probability density function (PDF). 2/8
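A small numerical sketch of the same idea (assuming a Gaussian model with unknown mean θ and fixed scale; an illustrative choice, not necessarily the model in the diagram): the same expression f(x; θ) is a PDF when θ is fixed and x varies, and a likelihood when the observed data are fixed and θ varies:

```python
import numpy as np
from scipy import stats

# Model f(x; θ): a Gaussian with unknown mean θ and fixed standard deviation 1.

# PDF over x, for a fixed parameter value θ = 1.0 (integrates to 1 over x).
xs = np.linspace(-3.0, 5.0, 801)
pdf = stats.norm.pdf(xs, loc=1.0, scale=1.0)

# Likelihood over θ, for fixed observed data (illustrative values).
data = np.array([0.8, 1.3, 1.1, 0.5, 1.6])
thetas = np.linspace(-3.0, 3.0, 601)
log_lik = np.array([stats.norm.logpdf(data, loc=t, scale=1.0).sum() for t in thetas])

# Maximum likelihood estimate: the θ that maximizes the (log-)likelihood.
theta_mle = thetas[np.argmax(log_lik)]
print("MLE of θ:   ", theta_mle)   # for a Gaussian mean, this matches the sample mean
print("sample mean:", data.mean())
```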