Why does #nbdev use such odd naming for your notebooks, such as "00_core.ipynb"?

There are actually a few reasons. Let's talk about them 🧵
First, it helps keep things organized module-wise. Numbering every notebook lets you section off related segments of code into groups.

An example of this is in @fastdotai, where notebooks starting with 20 are generally vision tutorials.
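For illustration, a numbered nbdev project might be laid out something like this (every name below except 00_core.ipynb is made up for the sake of the example):

```
nbs/
├── 00_core.ipynb              # core library internals
├── 01_data.ipynb              # data-loading utilities
├── 20_vision.tutorial.ipynb   # 20s: vision tutorials
├── 21_vision.augment.ipynb
└── 40_text.tutorial.ipynb     # 40s: another topic group
```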
But there's ~actually~ a second reason why this can be super cool!

Currently on GitHub, when we run the tests for our notebooks, we run them all at once by calling `nbdev_test_nbs`. But we can actually speed this up by testing ~groups~ of notebooks! How does this work?
GitHub Actions has a concept called "runners": essentially, I can split certain parts of a job out to run on their own machines, each with its own environment and customization, to speed things up. These parallel jobs are controlled through a "matrix". An example is sketched below:
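Since the original screenshot isn't reproduced here, here's a minimal sketch of what such a matrix could look like; the notebook globs are illustrative, not the exact split we used:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false   # let the other groups keep running if one fails
      matrix:
        # each entry spawns its own parallel job on its own runner
        nb: ["0[0-9]", "1[0-9]", "2[0-9]", "3[0-9]"]
```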
So what winds up happening is that we split one job into roughly 8 to 12 jobs for the notebooks alone, since each job only tests a small group of notebooks at a time.

This both allows us to isolate problematic areas in the library, ~and~ drastically speed up the time it takes!
To use this, we just modify the "testing" section of our GitHub action to utilize this matrix:
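As a rough sketch (assuming nbdev v1's CLI, where `nbdev_test_nbs` accepts an `--fname` glob), the test step inside the same job as the matrix above picks up its group like so:

```yaml
    steps:
      - uses: actions/checkout@v2
      # ... Python setup and `pip install` steps omitted ...
      - name: Test a group of notebooks
        run: |
          # ${{ matrix.nb }} expands to one glob per job, so each job
          # only runs its own slice of the notebooks
          nbdev_test_nbs --fname "nbs/${{ matrix.nb }}*.ipynb"
```

With `fail-fast: false` set, a failing group doesn't cancel the others, which is what makes it easy to see exactly which area of the library broke.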
I will say, one common problem to watch out for with this method is race conditions. I had to toy with the notebook split a fair bit before GitHub Actions stopped yelling at me. It does take some tweaking, but we sped up testing from 17 minutes to 7 :)
*For AdaptNLP
