The Tesla team discussed how they are using AI to crack Full Self-Driving (FSD) at their Tesla AI Day event.
They introduced many cool things:
- HydraNets
- Dojo Processing Units
- Tesla bots
- So much more...
Here's a quick summary 🧵:
They introduced their single deep learning model architecture ("HydraNet") for extracting features from the camera images and transforming them into a "vector space".
This includes extracting multi-scale features from each of the 8 cameras, fusing them with a transformer that attends to the important features, incorporating kinematic signals, and processing everything spatiotemporally with a feature queue and spatial RNNs, all trained via multi-task learning.
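To make that concrete, here is a minimal, hypothetical PyTorch sketch of the overall idea: a shared backbone extracts features per camera, a transformer fuses the 8 views into a fixed-size "vector space" representation, and several task-specific "hydra" heads read from the shared trunk (multi-task learning). All module names, sizes, and heads here are my own illustrative assumptions, not Tesla's actual code, and the temporal parts (feature queue, spatial RNN) are omitted.

```python
import torch
import torch.nn as nn


class HydraNetSketch(nn.Module):
    """Toy multi-camera, multi-head network in the spirit of the talk."""

    def __init__(self, num_cameras=8, feat_dim=256, num_queries=64):
        super().__init__()
        # Shared per-camera backbone (stand-in for the real feature extractor).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Learned queries that pull information from all cameras into a
        # fixed-size "vector space" representation.
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        self.fusion = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Task-specific heads sharing one trunk ("hydra" heads) -- the tasks
        # and output sizes below are placeholders.
        self.heads = nn.ModuleDict({
            "lanes": nn.Linear(feat_dim, 16),
            "objects": nn.Linear(feat_dim, 10),
            "depth": nn.Linear(feat_dim, 1),
        })

    def forward(self, images):
        # images: (batch, num_cameras, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.backbone(images.view(b * n, c, h, w))     # (b*n, D, 8, 8)
        feats = feats.flatten(2).transpose(1, 2)                # (b*n, 64, D)
        feats = feats.reshape(b, -1, feats.shape[-1])           # (b, n*64, D)
        queries = self.queries.unsqueeze(0).expand(b, -1, -1)   # (b, Q, D)
        fused = self.fusion(queries, feats)                     # (b, Q, D)
        return {name: head(fused) for name, head in self.heads.items()}


model = HydraNetSketch()
out = model(torch.randn(2, 8, 3, 128, 256))  # 2 samples, 8 cameras each
print({k: v.shape for k, v in out.items()})
```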
Planning and control of the car rely on reinforcement learning-based approaches.
Here is the entire pipeline put together:
Next, they discussed their labeling pipeline, which is all done in-house. They do this by labeling directly in the "vector space".
They also use simulations to provide additional data:
The Tesla team also discussed their Dojo supercomputers, which are specialized supercomputers for machine learning!
While the hardware is still in development, they are developing exaflop-level servers!
Then, out of nowhere, @elonmusk introduced the Tesla Bot!
The Tesla AI team is designing and building a humanoid robot to perform repetitive tasks, reusing the AI algorithms originally developed for FSD.
I have only covered the tip of the iceberg!
Check out the recording here:
A quick clarification: the RL-based techniques are not yet used in production for planning and control, but the team is currently exploring them. Nonetheless, it is still very exciting and interesting!