We can't forget the forums! Easily my one-stop shop whenever I have a question about *anything* in fastai. Even if it's just a casual stroll through the threads, there's something there for everyone to learn, with a welcoming community! forums.fast.ai
Trailing back and forth between the two as I look into and use nbdev lets me quickly see how fastai's suite of "magic" functions works, and helps me learn how to utilize them better
...
Then we have @amaarora's timmdocs, supplementary documentation for @wightmanr's timm library, which he's been doing an *excellent* job on:
Now, onto my biased pick: my walkwithfastai (walkwithfastai.com), which consists of numerous other tutorials and different DataBlock methods that aren't shown in the fastai documentation, and also contains my API-forward approach to teaching @fastdotai
...
Additional mentions:
The fastdot library (fastdot.fast.ai) is what I use for 99% of my model visualizations when writing presentations for work or conferences
The fastrelease library (fastrelease.fast.ai) streamlines pip and conda releases as well as release notes
fastpages (fastpages.fast.ai), easily the *quickest* and easiest way to get your own blog up and running, utilizing either a simple Word document, Markdown, or my favorite: Jupyter Notebooks. My own blog built with it is at muellerzr.github.io/fastblog
Can anyone tell me the difference between these two snippets of @fastdotai code? How do they behave differently:
Answer: There isn't one!
One is more "verbose" in my opinion (get_x and get_y), but neither loses any flexibility as you use it.
2/
But what if I have three "blocks"? Don't I need to use `getters` for that?
Not necessarily. Since we can declare an `n_inp` (as seen below), we can simply pass `get_x` or `get_y` multiple functions to be used for those blocks instead:
lib2nbdev has been on my mind for months now, and finally it exists! What's the biggest struggle with @fastdotai's #nbdev? Using it on existing projects. This tool aims to fix that.
You are then guided through the process of setting up nbdev's `settings.ini` file, and afterwards all of your existing code is converted directly into fully functional notebooks!
But wait, don't you just throw it all into one big cell?
NO! Instead, lib2nbdev will determine what should be private or public, what's an import, and which cell tag each cell should be generated with, such as the example below, which has both private and public tags:
Thanks everyone for joining me on this much briefer stream! In this thread I'll try and summarize all we discussed, including:
- x is not implemented for y
- LossMetrics
- TensorBase and tensor metadata
And here is the stream:
1/
.@fastdotai has utilized its own tensor subclassing system since v2. Originally there weren't many issues, as fastai just "let" you do things with these classes. Then @PyTorch came along and introduced proper subclass dispatch in 1.7. Suddenly, people were getting this! Why?
PyTorch became much more explicit about which subclasses can interact with one another. As a result, @fastdotai had to make the TensorBase class, which allows any tensor types to interact with each other. A quick function to convert and an example are below 3/
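A minimal sketch of the problem and the idea behind the fix (the class names and the `__torch_function__` override here are simplified stand-ins, NOT fastai's actual TensorBase code):

```python
import torch

# Two sibling Tensor subclasses. Since PyTorch 1.7, the default
# __torch_function__ returns NotImplemented when unrelated subclasses are
# mixed in one op, so adding them raises TypeError ("no implementation
# found for ...").
class PlainImage(torch.Tensor): pass
class PlainMask(torch.Tensor): pass

img = torch.zeros(3).as_subclass(PlainImage)
msk = torch.ones(3).as_subclass(PlainMask)
try:
    img + msk
except TypeError as e:
    print("mixing sibling subclasses failed:", type(e).__name__)

# Simplified sketch of a TensorBase-style fix: dispatch every op as if the
# arguments were plain Tensors, so any mix of subclasses composes.
class TensorBase(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        return super().__torch_function__(func, (torch.Tensor,), args, kwargs or {})

class TensorImage(TensorBase): pass
class TensorMask(TensorBase): pass

a = torch.zeros(3).as_subclass(TensorImage)
b = torch.ones(3).as_subclass(TensorMask)
print((a + b).sum().item())  # the mixed op now succeeds
```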
Thank you all SO much for joining me tonight, it was a ton of fun to be back in the streaming/teaching wheelhouse again! We had almost 300 people chime in! For those that missed out, the video link is here:
1/
The notebooks are also available here, and eventually I'll have them fully integrated into walkwithfastai.com, just need to figure out a good way first! 2/ github.com/walkwithfastai…
From what I gathered from the feedback, this stream really did open everyone's eyes to the power of @fastdotai's Callback system, and how its training loop works. It's not just a marketing phrase when they say you can modify ~everything~ in the training loop! 3/
Have you wanted to know just what goes on in @fastdotai's training loop and just what those pesky Callbacks are up to? In the latest release of Walk with @fastdotai this is now possible!
What we've done here is extend fastai's existing `show_training_loop` functionality to include some information (from a provided docstring) about what exactly is occurring during each event. This then lets you get more familiar with fastai's training loop, ... 2/
Understand when things are called, and what happens at each step. The old version (which is still there if you pass `verbose=False`) simply showed which Callbacks were triggered (pictured below)
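The idea above can be sketched library-free (the event and callback names below are hypothetical, and this is not fastai's actual implementation): walk each event in the loop and, when verbose, print the docstring of every callback method registered for it.

```python
# A toy callback whose event methods carry docstrings, mimicking the idea
# that each event documents what it does.
class TrainEvalCallback:
    def before_fit(self):
        "Set the iteration count and percent-complete trackers to zero"
    def before_batch(self):
        "Move the batch to the training device"

# Hypothetical (abridged) ordered list of training-loop events
EVENTS = ["before_fit", "before_batch"]

def show_training_loop(callbacks, verbose=True):
    """Print each event and, if verbose, each callback method's docstring."""
    for event in EVENTS:
        print(f"Start {event}")
        for cb in callbacks:
            method = getattr(type(cb), event, None)
            if method is None:
                continue  # this callback doesn't hook into this event
            if verbose and method.__doc__:
                print(f"   - {type(cb).__name__}: {method.__doc__}")
            else:
                print(f"   - {type(cb).__name__}")

show_training_loop([TrainEvalCallback()])
```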