So, @jeremyphoward and @HamelHusain's new nbprocess library (or nbdev v2): it's going to be a *GAME CHANGER*!

In a *single afternoon* I managed to create a module that lets you export #nbdev tests into pytest submodules automatically: github.com/muellerzr/nbpr…
This is a game changer because you no longer have to worry about whether to keep your tests in your notebook: they also exist as unittest (or pytest) modules with ease. The craziest part about this for me is that it's < 100 lines of code, TOTAL!
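To give a sense of how little machinery that takes, here's a minimal sketch of the general idea (my own illustration, not the actual code in that repo; the `#|test` cell marker and the output layout are assumptions):

```python
# Sketch only (not the actual muellerzr implementation): pull code cells marked
# with a hypothetical `#|test` directive out of a notebook and rewrite them as
# functions in a pytest-style module.
from pathlib import Path
import nbformat

def export_tests(nb_path: str, out_dir: str = "tests") -> Path:
    nb = nbformat.read(nb_path, as_version=4)
    # Keep only code cells whose first line is the (hypothetical) test marker
    test_cells = [c.source for c in nb.cells
                  if c.cell_type == "code" and c.source.startswith("#|test")]
    out = Path(out_dir) / f"test_{Path(nb_path).stem}.py"
    out.parent.mkdir(exist_ok=True)
    funcs = []
    for i, src in enumerate(test_cells):
        # Indent the cell body (minus the marker line) into its own test function
        body = "\n".join(f"    {line}" for line in src.splitlines()[1:])
        funcs.append(f"def test_cell_{i}():\n{body or '    pass'}\n")
    out.write_text("\n".join(funcs))
    return out
```

Each notebook then gets a mirror `tests/test_<notebook>.py`, so CI can run plain `pytest` without ever opening a notebook.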
This isn't just nbdev version 2. It's my ~dream~ of what I wanted nbdev to be in the talk I gave in November. Is it all there yet? God, no.

But I just hit one of the major points in less than a day. This will change the literate programming landscape.


More from @TheZachMueller

May 19
A few tips and tricks I learned about @Docker today for keeping image sizes small 🧵
Use a multi-stage build to keep the resulting image lightweight: pre-compile all of the installs in one stage, then bring only those installed files into the final image. I could save 500 MB+ in some cases by doing this.
The second trick I learned (which should be an obvious one!) is to install the exact torch wheel for the hardware you're using. For example, if you're running on CPU but don't specify the CPU wheel, your Docker image can be 2 GB when in reality it only needs to be 800 MB or so!
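For illustration, here's a rough Dockerfile sketch combining both tricks (base images, versions, and the CPU wheel index URL are examples per PyTorch's install docs, not the exact setup from my images):

```dockerfile
# Sketch only: two-stage build where the heavy pip installs happen in a
# throwaway "builder" stage and only the resulting environment is copied over.
FROM python:3.10-slim AS builder
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Trick 2: explicitly pull the CPU-only torch wheel instead of the default
# CUDA-bundled one
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu

FROM python:3.10-slim
# Trick 1: bring only the installed files into the final image; pip caches,
# build tools, and downloaded wheels stay behind in the builder stage
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["python", "-c", "import torch; print(torch.__version__)"]
```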
Feb 6
Tonight we're talking about @fastdotai's `tabular_learner`, and more specifically the TabularModel 🧵
The role of `tabular_learner` is mostly to build a `TabularModel` for your data. This model is a series of embedding matrices and some batch normalization, before going through a few rounds of LinBnDrop. 2/
What makes this model different from every other model in @fastdotai is that it splits our inputs into **two** separate groups, categorical and continuous, meaning the model expects a tuple of (categorical, continuous) inputs. 3/
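A stripped-down PyTorch sketch of that idea (my own simplification for illustration, not fastai's actual `TabularModel` code): one embedding matrix per categorical column, batch norm over the continuous columns, then everything concatenated and pushed through linear layers.

```python
import torch
from torch import nn

class MiniTabularModel(nn.Module):
    """Simplified illustration of the TabularModel idea, not fastai's code."""
    def __init__(self, emb_szs, n_cont, out_sz, hidden=200):
        super().__init__()
        # One embedding matrix per categorical column: (cardinality, emb dim)
        self.embeds = nn.ModuleList([nn.Embedding(card, dim) for card, dim in emb_szs])
        self.bn_cont = nn.BatchNorm1d(n_cont)      # normalize the continuous columns
        n_emb = sum(dim for _, dim in emb_szs)
        self.layers = nn.Sequential(               # stands in for fastai's LinBnDrop stack
            nn.Linear(n_emb + n_cont, hidden), nn.ReLU(),
            nn.Linear(hidden, out_sz),
        )

    def forward(self, x_cat, x_cont):
        # The two input groups arrive separately: ints for categories, floats for continuous
        emb = torch.cat([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        return self.layers(torch.cat([emb, self.bn_cont(x_cont)], dim=1))

# e.g. two categorical columns (cardinalities 10 and 7) and 3 continuous columns
model = MiniTabularModel(emb_szs=[(10, 5), (7, 4)], n_cont=3, out_sz=2)
preds = model(torch.randint(0, 7, (8, 2)), torch.randn(8, 3))
```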
Feb 4
What is @fastdotai's `cnn_learner`, and what magic does it do? 🧵
The `cnn_learner` builds a fastai Learner designed specifically for vision transfer learning, using some of the best practices.

We start with a baseline `arch`, such as a resnet34, cut off the last layer, and introduce a @fastdotai head for our task. 2/
Along with this, we freeze the backbone of the architecture (which means setting its parameters to not be trainable) and only train the head (that custom head) of the model, as sketched below. 3/
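In plain PyTorch, the recipe looks roughly like this (a sketch using torchvision's resnet34 for illustration; fastai's actual head and freezing logic do more than shown here):

```python
import torch
from torch import nn
from torchvision import models

# Sketch of the transfer-learning recipe cnn_learner automates (not fastai's exact code)
backbone = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)  # torchvision >= 0.13 weights API
backbone.fc = nn.Identity()                # "cut off the last layer": backbone now outputs 512 features

head = nn.Sequential(                      # a new task-specific head
    nn.Linear(512, 256), nn.ReLU(),
    nn.Dropout(0.25),
    nn.Linear(256, 10),                    # e.g. 10 classes
)
model = nn.Sequential(backbone, head)

# Freeze the backbone: only the head's parameters stay trainable
for p in backbone.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
```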
Nov 13, 2021
Gave it a second read-through (I had the opportunity to read the first draft a while ago); below you can find a thread of my review and some bits I enjoyed from it:
This book is an excellent companion to something like the @fastdotai book, course, or Walk with fastai. It explores some areas differently than what is presented in the course, which can perhaps help folks get a better grasp of some concepts. 1/
This is a small detail, but I really liked the fact that each dataset referenced in the book HAD an actual citation. I'm not sure how commonplace that is normally, but it was something that surprised me (in a good way). 2/
Nov 11, 2021
It's always a welcome surprise when I see fastinference being used 😁
🤯 Okay, these are actually numbers I DID NOT expect. The last release of fastinference was in MARCH...
Perhaps it's time for me to revisit fastinference?

What are some things that folks wish it could do?
Nov 10, 2021
Why does #nbdev do such weird naming for your notebooks, such as "00_core.ipynb"?

There's actually a few reasons. Let's talk about that 🧵
First, it helps keep things organized module-wise. Numbering everything lets you section off, by group, how certain segments of code are laid out.

An example of this is in @fastdotai, where notebooks starting with 20 are generally vision tutorials
But there's ~actually~ a second reason why this can be super cool!

In GitHub, currently when we run the tests for our notebooks, we run them all at once by calling `nbdev_test_nbs`. But we can actually speed this up by running ~groups~ of notebooks! How does this work?
