Today marks an extremely exciting day for fans of #nbdev: I'm releasing a new project, "nbdev-extensions"! This PyPI package will contain features that I and others have thought of, and that I've brought to life in the nbdev framework for everyone to try!
The first extension is a `new_nb` command. It quickly generates a new blank template notebook for you to immediately dive into as you're exploring nbdev, and it's fully configurable so you can control what your notebook's starting content should be:
2/5
The second (and my favorite) extension is a new note-annotation tool I'm calling "Code Notes". You write a code cell, and in markdown cells below it you write notes on particular sections of that code. The documentation will render these notes in a beautiful table:
3/5
This allows you to enter more of a flow state: your code above stays clear and exactly how you originally wrote it, while your documentation (or notes) can detail whatever you'd like in further explanation. It's a win-win!
4/5
If you're interested in #nbdev, please try this library out! And if you do like it, make sure to give it a ⭐ on GitHub, and give me (@TheZachMueller) a follow to keep up with the latest and greatest extensions I come up with 😄
New article on #python decorators is out! Specifically, it shows you how decorators are written, what they do, and the power they give you. I even show an example of when you'd use the strange "nonlocal" 1/3 muellerzr.github.io/fastblog/pytho…
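To give a taste of the kind of pattern the article covers, here's a minimal sketch (my own toy example, not taken from the article) of a decorator that needs `nonlocal` to keep state between calls:

```python
import functools

def count_calls(func):
    "Toy decorator: tracks how many times `func` has been called."
    calls = 0
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # `nonlocal` lets us rebind `calls` in the enclosing function's scope,
        # which a plain assignment inside `wrapper` could not do
        nonlocal calls
        calls += 1
        print(f"Call #{calls} to {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@count_calls
def greet(name):
    return f"Hello, {name}!"

greet("Zach")    # Call #1 to greet
greet("Jeremy")  # Call #2 to greet
```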
The context manager sequel should be out in the next few days. This one will take a bit longer because in some cases decorators are context managers, and context managers also have a few more rules, so it'll take some time for me to get it how I want it :) 2/3
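As a quick preview of that overlap, here's a sketch (again my own example, not from the upcoming post) of a context manager built on `contextlib.ContextDecorator`, which is exactly the case where one object works both ways:

```python
import time
from contextlib import ContextDecorator

class timer(ContextDecorator):
    "Times a block of code; ContextDecorator lets it double as a decorator."
    def __enter__(self):
        self.start = time.perf_counter()
        return self
    def __exit__(self, *exc):
        print(f"Took {time.perf_counter() - self.start:.4f}s")
        return False  # don't suppress exceptions

# Used as a context manager:
with timer():
    sum(range(1_000_000))

# Used as a decorator:
@timer()
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

slow_add(1, 2)
```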
The other aim with these two is to give you easy-to-view boilerplate examples of decorators and context managers to play with, and explain how they work.
Why? Because I've been wanting those for many months now, and could really use them myself for reference 3/3
Listened to everyone's responses to the new `no_sync` wrapper in @huggingface's Accelerate and I took them to heart.
Here's our new gradient accumulation context manager available in Accelerate dev now! A thread on design choices and the struggles 1/4🧵
@huggingface The goal with Accelerate is to abstract away as little as we possibly can while still letting you perform what you want on any training device (CPU, multi-GPU, etc.). As a result, it came down to a question of "how can we simplify gradient accumulation without hiding anything?" 2/4
@huggingface A compromise was found: instead, we focus on removing the duplicated code that performing gradient accumulation would otherwise require, and we also help handle the loss for you. It doesn't reduce the clarity of your code, and it keeps the code consistent across platforms 3/4
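Here's roughly what a training loop looks like with the new context manager (a minimal sketch with a toy model and data; swap in your own):

```python
import torch
from accelerate import Accelerator

# Toy setup so the sketch runs end-to-end; replace with your real model/data
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
loss_fn = torch.nn.CrossEntropyLoss()

# Tell Accelerate how many batches to accumulate over
accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # `accumulate` decides when gradients should sync and when the optimizer
    # should actually step, removing the usual `if step % n == 0:` boilerplate
    with accelerator.accumulate(model):
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```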
A few tips and tricks I learned about @Docker today for keeping image sizes small 🧵
Use a multi-stage approach to keep the resulting image lightweight: pre-compile all of the installs in one stage, then bring just those installed files into the final image. I could save 500 MB+ in some cases by doing this
The second trick I learned (which should be an obvious one!) is to install the specific torch wheel for the hardware you're actually using. For example, if you're running on CPU but don't specify the CPU wheel, your Docker image can be 2 GB when in reality it only needs to be 800 MB or so!
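Putting both tips together, here's a rough sketch of the pattern (the Python version, wheel index URL, and paths are illustrative assumptions, not from a specific project):

```dockerfile
# ---- Build stage: do all of the installing here ----
FROM python:3.10-slim AS builder

# Install the CPU-only torch wheel explicitly so we don't pull in the much larger CUDA build
RUN pip install --no-cache-dir --prefix=/install \
    torch --index-url https://download.pytorch.org/whl/cpu

# ---- Final stage: copy in only the installed files ----
FROM python:3.10-slim

# Bring over just the installed packages, leaving pip caches and build leftovers behind
COPY --from=builder /install /usr/local

CMD ["python", "-c", "import torch; print(torch.__version__)"]
```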
Tonight we're talking about @fastdotai's `tabular_learner`, and more specifically the TabularModel 🧵
The role of `tabular_learner` is mostly to build a `TabularModel` for your data. This tabular model is a series of embedding matrices and some batch normalization, before going through a few rounds of LinBnDrop, as shown below 2/
What makes this model different from all other models that @fastdotai has is that it splits our inputs into **two** separate groups, the categorical and continuous, meaning the model expects a tuple:
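To make that concrete, here's a tiny sketch (the embedding sizes and layer widths are made up for illustration) of building a `TabularModel` directly and feeding it the categorical/continuous pair:

```python
import torch
from fastai.tabular.all import TabularModel

# (cardinality, embedding size) for each categorical column -- made-up numbers
emb_szs = [(10, 5), (7, 4)]
model = TabularModel(emb_szs, n_cont=3, out_sz=2, layers=[200, 100])

# The model expects a *tuple* of inputs: categorical indices and continuous values
x_cat  = torch.randint(0, 7, (64, 2))  # one column of indices per categorical variable
x_cont = torch.randn(64, 3)            # three continuous columns
preds  = model(x_cat, x_cont)          # -> shape (64, 2)
```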
What is @fastdotai's `cnn_learner`, and what magic does it do? 🧵
The `cnn_learner` builds a fastai Learner designed specifically for vision transfer learning, using some of the best practices out there.
We start with a baseline `arch`, such as a resnet34, cut off the last layer, and introduce a @fastdotai head (such as below) for our task 2/
Along with this, we freeze the backbone of the architecture (which means setting its params to not be trainable) and only train the head (that Custom Head) of the model. 3/
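For reference, here's a minimal sketch of using it end to end (the PETS dataset and label function are just placeholders for whatever data you have):

```python
from fastai.vision.all import *

# A toy DataLoaders; swap in your own data
path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=lambda f: f.name[0].isupper(), item_tfms=Resize(224))

# Cuts the classifier off a pretrained resnet34 and attaches a fastai head sized for our labels
learn = cnn_learner(dls, resnet34, metrics=error_rate)

learn.model[1]          # the custom head that was added on top of the backbone
learn.fit_one_cycle(1)  # the backbone starts out frozen, so this trains only the head
```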