In my blog post about GitHub Copilot/Codex (tmabraham.github.io/blog/github_co…), I pointed out its lack of knowledge of newer libraries like @fastdotai v2. When I tested @OpenAI Codex yesterday, it provided an almost-working (the regex was off by one character 😛) example of fastai v2 code
A few observations: 1. You have to specifically ask for fastai v2 code, and even then the import needs to be changed from "fastai2.vision.all" to "fastai.vision.all"
2. It understands the differences between the fastai v1 and v2 APIs (correct use of ImageDataLoaders, the fine_tune function that is new to v2, and use of item_tfms to resize before batching)
3. I went back to GitHub Copilot and tried to prompt it to write fastai v2 code, but it fails, providing only fastai v1 code. So it seems the Codex models available through the OpenAI API are trained on more recent data.
Adding to the main thread a completely working (except for the fastai2 import) example with the Caltech 101 dataset:
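A minimal sketch of that example (the original was attached as an image), with the import already corrected; URLs.CALTECH_101 is fastai's hosted copy of the dataset, and the exact architecture and epoch count here are illustrative stand-ins:

```python
from fastai.vision.all import *  # Codex wrote "fastai2.vision.all"; the
                                 # package is plain "fastai" as of the v2 release

# Download and extract fastai's hosted copy of Caltech 101
path = untar_data(URLs.CALTECH_101)

# v2 idioms Codex got right: ImageDataLoaders (replacing v1's ImageDataBunch),
# labels taken from the parent folder, and item_tfms to resize each image
# before batching
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))

# fine_tune is new in v2: one epoch with the body frozen, then unfreeze
# and train the whole network
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(3)
```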
At their Tesla AI Day event, the Tesla team discussed how they are using AI to crack Full Self-Driving (FSD).
They introduced many cool things:
- HydraNets
- Dojo Processing Units
- Tesla Bot
- So much more...
Here's a quick summary 🧵:
They introduced their single deep learning architecture ("HydraNet") for extracting features and transforming them into a "vector space"
This includes multi-scale features from each of the 8 cameras, integrated with a transformer that attends to important features, incorporation of kinematic features, and spatiotemporal processing via a feature queue and spatial RNNs, all trained with multi-task learning.
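Tesla hasn't released this code, so purely as an illustration of the shared-backbone, many-heads pattern they described, here is a toy PyTorch sketch (every module choice, name, and size is my own assumption, and the feature queue / spatial RNN parts are omitted):

```python
import torch
import torch.nn as nn

class HydraNetSketch(nn.Module):
    """Illustrative only: a shared backbone feeding several task heads,
    the basic multi-task pattern Tesla described (not their actual model)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared per-camera feature extractor (stand-in for their RegNet/BiFPN stack)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Transformer layer as a stand-in for attending across the 8 camera features
        self.fusion = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                                 batch_first=True)
        # One lightweight head per task -- the "hydra" heads
        self.heads = nn.ModuleDict({
            "objects": nn.Linear(feat_dim, 10),  # e.g. object classes
            "lanes":   nn.Linear(feat_dim, 4),   # e.g. lane attributes
            "depth":   nn.Linear(feat_dim, 1),   # e.g. a depth summary
        })

    def forward(self, cams):                     # cams: (batch, 8, 3, H, W)
        b, n, c, h, w = cams.shape
        feats = self.backbone(cams.view(b * n, c, h, w)).view(b, n, -1)
        fused = self.fusion(feats).mean(dim=1)   # pool over the 8 cameras
        return {name: head(fused) for name, head in self.heads.items()}

preds = HydraNetSketch()(torch.randn(2, 8, 3, 128, 128))
```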
I find it very interesting that Twitter recommends relevant tweets to me, yet the topic suggestion is completely off. It looks to me like the recommendation and topic-selection algorithms are completely different.
While the tweet recommendation algo seems more sophisticated, likely taking the semantic content of the tweet into consideration, the topic selection algo seems to be a simple one that heavily weights the presence of keywords.
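As a toy illustration of what I mean (this is my guess, not Twitter's actual code), a purely keyword-weighted tagger fires on surface strings alone, which is exactly how a tweet ends up under the wrong topic:

```python
# Toy illustration of my hypothesis (not Twitter's actual algorithm):
# a keyword-based topic tagger matches on surface strings, ignoring
# what the tweet is actually about.
TOPIC_KEYWORDS = {
    "Machine Learning": {"transformer", "neural", "gradient"},
    "Electrical Engineering": {"transformer", "circuit", "voltage"},
}

def suggest_topics(tweet: str) -> list[str]:
    words = set(tweet.lower().split())
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws]

# A tweet about ML models gets tagged with both topics, because the
# word "transformer" appears in each keyword set -- exactly the kind
# of off-target suggestion I keep seeing.
print(suggest_topics("training a transformer with gradient descent"))
# ['Machine Learning', 'Electrical Engineering']
```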
Saw a few tweets on pigeon-based classification of breast cancer (@tunguz, @hardmaru, @Dominic1King, & ML Reddit), which was published in 2015. I work with the legend himself @rml52! I thought for my 1st Twitter thread I'd go over the paper's main points & our current work! (1/11)
My PI often likes to say AI stands for "avian intelligence." And indeed, his paper shows pigeons can learn the difficult task of classifying the presence of breast cancer in histopathological images. (2/11)
The pigeons were placed in an apparatus and the 🔬 image was shown to them on a touchscreen. The pigeons were given food if they pressed the correct button on the screen. (Unlike regular pathologists, who are not given free food when analyzing images!) (3/11)