Gradio is designed specifically for creating interfaces for machine learning models, and it takes ~5 min to get an app running!
It also includes useful features like interpretability and flagging unexpected model behavior, making it quite useful for gaining more insight into your models.
That said, Streamlit is a highly flexible and customizable Python-based UI framework that also supports third-party components, which extend what's possible with Streamlit.
Additionally, Streamlit includes useful features like caching, which lets you build performant apps!
In my opinion, it's worth learning both tools so you can create amazing demos for your machine learning projects.
If you found this thread useful, consider following me for more ML- and STEM-related content!
The Tesla team discussed how they are using AI to crack Full Self Driving (FSD) at their Tesla AI Day event.
They introduced many cool things:
- HydraNets
- Dojo Processing Units
- Tesla bots
- So much more...
Here's a quick summary 🧵:
They introduced their single deep learning model architecture ("HydraNet") for extracting features and transforming them into a "vector space"
This includes multi-scale features from each of the 8 cameras, integrated with a transformer that attends to important features, incorporating kinematic features, and processing everything spatiotemporally using a feature queue and spatial RNNs, all trained via multi-task learning.
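The core "shared trunk + per-task heads" idea behind HydraNets can be sketched in a few lines of PyTorch. This is a toy illustration, not Tesla's actual architecture; the layer sizes and task names ("lanes", "objects", "lights") are made up for the example.

```python
import torch
import torch.nn as nn

class ToyHydraNet(nn.Module):
    """Shared backbone feeding multiple task-specific heads."""
    def __init__(self, in_dim: int = 64, feat_dim: int = 32):
        super().__init__()
        # Shared backbone: stands in for the multi-camera feature extractor
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One lightweight head per task, all reading the same features
        self.heads = nn.ModuleDict({
            "lanes": nn.Linear(feat_dim, 4),
            "objects": nn.Linear(feat_dim, 10),
            "lights": nn.Linear(feat_dim, 3),
        })

    def forward(self, x: torch.Tensor) -> dict:
        shared = self.backbone(x)  # the shared "vector space" features
        return {name: head(shared) for name, head in self.heads.items()}

model = ToyHydraNet()
out = model(torch.randn(2, 64))  # dict of per-task outputs for a batch of 2
```

Training then sums a loss per head, which is what "multi-task learning" means here: one backbone, many objectives.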
I find it very interesting that Twitter recommends relevant tweets to me, but the topic suggestion is completely off. It looks to me like the recommendation and topic selection algorithms are completely different.
While the tweet recommendation algo is more sophisticated and likely takes the semantic content of the tweet into account, the topic selection algo seems to be a simple one that heavily weighs the presence of keywords.
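A keyword-weighted topic picker like the one I'm guessing at could be as simple as this sketch (the topics and keyword sets are invented for illustration, not Twitter's actual lists):

```python
# Score each topic by how many of its keywords appear in the tweet;
# the highest overlap wins, with no semantic understanding at all.
TOPIC_KEYWORDS = {
    "machine learning": {"model", "training", "neural", "dataset"},
    "web dev": {"css", "javascript", "frontend", "react"},
}

def suggest_topic(tweet: str) -> str:
    words = set(tweet.lower().split())
    return max(TOPIC_KEYWORDS, key=lambda t: len(TOPIC_KEYWORDS[t] & words))

print(suggest_topic("training a neural model on a new dataset"))
# → machine learning
```

A scheme like this fires on surface keywords even when the tweet's actual meaning points elsewhere, which would explain the off-base suggestions.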
Saw a few tweets on pigeon-based classification of breast cancer (@tunguz, @hardmaru, @Dominic1King, & ML Reddit), which was published in 2015. I work with the legend himself, @rml52! I thought for my 1st Twitter thread I'd go over the paper's main points & our current work! (1/11)
My PI often likes to say AI stands for avian intelligence. And indeed his paper shows pigeons can learn the difficult task of classifying the presence of breast cancer in histopathological images. (2/11)
The pigeons were placed in an apparatus and the 🔬 image was shown to them on a touchscreen. The pigeons were given food if they pressed the correct button on the screen. (Unlike regular pathologists, who are not given free food when analyzing images!) (3/11)