Curious what Tesla means by upreving their static obstacle neural nets?
Let's see how the Tesla FSD Beta 10.5 3D Voxel nets compare to the nets from two months ago.
The new captures are from the same area as the old ones, so we can directly compare the outputs.
1/N
This first example is a small pedestrian crosswalk sign in the middle of the road. It's about 1 foot (~0.3 m) wide, so it should occupy roughly a single voxel in the nets.
Under the old nets it shows up as a large blob with an incorrect depth. Under the new nets it's much better.
Under the old nets the posts show up as huge blobs and disappear when the car gets close to them. The probabilities seem fairly consistent no matter how far away the sign is, even though up close the net should be more confident.
I don't have an old capture of these cones to compare, but they seem plenty distinct and good enough to drive on. The predictions are stable next to and behind the car.
Of note: the cones merge together into a single line farther away from the car.
Since I suspect this training data is automatically generated, the repetitive colors and patterns of the cones may be confusing their offline algorithm into thinking it's one solid object.
I'm not sure it matters for driving given their close spacing
Here's an example of making a right turn into a narrow road with hard curbs on both sides. The car is outputting pretty accurate estimates of the curbs through the tight corner
Seems like my added car model is a tad too far forward compared to reality
It's pretty clear that there have been some significant, though incremental, improvements to these nets.
The increased number of clips and the improvements to training data generation (autolabeler?) mentioned in the recent FSD patch notes seem to be paying off.
It's not clear if this is part of the autolabeler, given there are no real labels here, but if it's a shared system for managing the clips it may benefit both.
I'm also curious whether any model architecture changes are helping, though the patch notes haven't mentioned any.
@aelluswamy's talk at CVPR has a lot of very impressive improvements to Tesla's 3D voxel models. There are some subtle but very important things in the slides that I'm excited to incorporate into my own models. ⬇️
1) Image positional encoding: This adds in an x/y position encoding to each of the image space features. This should make it easier for the transformer to go from image space to 3D
It seems like a hybrid between a traditional CNN and ViT
ViT uses patches of the image encoded with a position before feeding them through a transformer. Using a position encoding with a traditional CNN seems like a nice balance: it keeps the CNN's efficiency and likely makes the per-camera encoder simpler.
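The slides don't show exactly how the position encoding is injected, so here's a minimal sketch of one common approach (CoordConv-style): append normalized x/y coordinate channels to a per-camera CNN feature map before it goes into the transformer. The shapes and function name are my own, not Tesla's.

```python
import numpy as np

def add_xy_position_encoding(features: np.ndarray) -> np.ndarray:
    """Concatenate normalized x/y coordinate channels onto a
    (C, H, W) CNN feature map, so a downstream transformer knows
    where in the image each feature came from."""
    c, h, w = features.shape
    ys = np.linspace(-1.0, 1.0, h)       # row position in [-1, 1]
    xs = np.linspace(-1.0, 1.0, w)       # column position in [-1, 1]
    y_grid, x_grid = np.meshgrid(ys, xs, indexing="ij")
    coords = np.stack([x_grid, y_grid])  # shape (2, H, W)
    # Append the two coordinate planes as extra feature channels.
    return np.concatenate([features, coords], axis=0)

feats = np.random.rand(64, 12, 40)       # toy per-camera feature map
out = add_xy_position_encoding(feats)
print(out.shape)  # (66, 12, 40)
```

A sinusoidal encoding (as in the original transformer paper) would also work here; the coordinate-channel version is just the simplest to show.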
Curious what I've been up to in the past 6 months? 😅
I've been working on a novel approach to depth and occupancy understanding for my FSD models!
It's much simpler than existing techniques and directly learns the 3D representation ⬇️
I posted the full write-up about a month ago, and I've had a number of PhD students, companies and labs ask to collaborate on papers/projects, so I think it's state of the art 🙂
In my last post I used a multi-stage pipeline to train the models:
1) train an image-space depth model from the main camera
2) generate a point cloud from an entire video
3) convert the point cloud to cubes
4) train a voxel model using multiple cameras
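Step 3 above (point cloud → cubes) can be sketched as a simple binning pass; a minimal version, with the voxel size and grid shape borrowed from the firmware outputs described later in the thread (the real conversion surely does more filtering):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.33,
             grid_shape=(384, 255, 12)) -> np.ndarray:
    """Bin an (N, 3) point cloud into a boolean occupancy grid.
    Points outside the grid extent are dropped."""
    idx = np.floor(points / voxel_size).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[in_bounds]
    grid = np.zeros(grid_shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

pts = np.array([[0.1, 0.1, 0.1], [1.0, 0.5, 0.2], [-5.0, 0.0, 0.0]])
grid = voxelize(pts)
print(grid.sum())  # 2 (the negative point falls outside the grid)
```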
When looking at this data there are two main things to consider: the static world around the vehicle, and the dynamic objects in the scene such as cars or people.
For static objects information from the forward facing cameras can compensate for lack of info on the repeaters
Here's a static scene in low light. With the blinker off the curb is too dark to see. The blinker actually helps since it provides light
The nearby signs and the further away barriers are mostly washed out but since they're static they can be remembered
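How that "remembering" works internally isn't visible from the outputs; as a toy illustration of the idea, you could fuse per-frame static probabilities by keeping a decayed copy of past evidence whenever the current frame's evidence is weaker (the real system presumably uses a learned recurrent memory, not a heuristic like this):

```python
def fuse_static_probability(prev: float, new: float, decay: float = 0.95) -> float:
    """Remember a static object: keep the decayed previous probability
    when the new frame's evidence is weaker (e.g. the sign is washed out)."""
    return max(prev * decay, new)

p = 0.0
for obs in [0.9, 0.1, 0.05]:  # seen clearly once, then washed out
    p = fuse_static_probability(p, obs)
print(round(p, 3))  # 0.812 -- the sign is still "remembered"
```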
Most of the critical FSD bits are missing in the normal firmware. These outputs aren't normally running, but with some tricks we can enable them.
This seems to be the general solution to handling unpredictable scenarios such as the Seattle monorail pillars or overhanging shrubbery.
The nets predict the location of static objects in the space around them via a dense grid of probabilities.
The output is a 384x255x12 dense grid of probabilities. Each cube seems to be ~0.33 meters on a side, and the grid currently covers predictions up to ~100 meters in front of the vehicle.
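A quick sanity check on those numbers, plus a helper to map a voxel index back to ego-relative meters. Note the origin offset here is a hypothetical placeholder; the outputs alone don't tell us where the car sits in the grid.

```python
VOXEL_M = 0.33          # approximate cube edge length in meters
GRID = (384, 255, 12)   # grid dimensions from the firmware outputs

def voxel_to_meters(i: int, j: int, k: int, origin=(80, 127, 4)):
    """Convert a voxel index to ego-relative meters. The origin
    (car position in the grid) is a guess, not a known value."""
    return tuple((a - o) * VOXEL_M for a, o in zip((i, j, k), origin))

# 384 voxels * 0.33 m ≈ 127 m total along the long axis, which is
# consistent with predictions reaching ~100 m ahead of the car
# (the rest presumably covers space beside/behind it).
print(round(GRID[0] * VOXEL_M, 1))  # 126.7
```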
We recently got some insight into how Tesla is going to replace radar in the recent firmware updates + some nifty ML model techniques
⬇️ Thread
From the binaries we can see that they've added velocity and acceleration outputs. These predictions, in addition to the existing xyz outputs, give much of the same information that radar traditionally provides (distance + velocity + acceleration).
For autosteer on city streets, you need to know the velocity and acceleration of cars in all directions, but radar only points forward. If the vision predictions are accurate enough to make a left turn, radar is probably unnecessary for the most part.