Tristan (@rice_fry)
Oct 11, 2021 · 15 tweets
Tesla has added new voxel 3D birdseye view outputs and it's pretty amazing!

Nice of them to start merging some bits of FSD into the normal firmware in 2021.36 so we can play with the perception side 🙂

Thanks to @greentheonly for the help!
Most of the critical FSD bits are missing in the normal firmware. These outputs aren't normally running, but with some tricks we can enable them.

This seems to be the general solution to handling unpredictable scenarios such as the Seattle monorail pillars or overhanging shrubbery.
The nets predict the location of static objects in the space around the vehicle via a dense grid of probabilities.

The output is a 384x255x12 dense grid of probabilities. Each cube seems to be ~0.33 meters on a side, and the grid currently covers predictions out to ~100 meters in front of the vehicle.
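For anyone poking at the raw outputs, here's a minimal sketch of how I map cube indices to metric coordinates around the car. The grid shape and ~0.33 m cube size come from the observed outputs; the axis ordering and where the ego vehicle sits in the grid are my own guesses.

```python
import numpy as np

# Grid shape and ~0.33 m cube size match the observed outputs; the axis
# order (longitudinal, lateral, vertical) and the ego cell are assumptions.
GRID_SHAPE = (384, 255, 12)
VOXEL_SIZE = 0.33  # metres per cube (approximate)

def voxel_centers(grid_shape=GRID_SHAPE, voxel_size=VOXEL_SIZE,
                  ego_index=(96, 127, 2)):
    """Return an (X, Y, Z, 3) array of cube-centre coordinates in metres,
    expressed relative to the assumed ego cell."""
    idx = np.stack(
        np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij"),
        axis=-1,
    ).astype(np.float32)
    return (idx - np.array(ego_index, dtype=np.float32)) * voxel_size

centers = voxel_centers()
print(centers.shape)          # (384, 255, 12, 3)
print(centers[..., 0].max())  # ~95 m ahead of the assumed ego cell
```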
This is similar to the previous single camera depth model but given the birdseye view treatment.

See our previous tweets on that at:
Here's an example of a post in the middle of a narrow road. fn.lc/s/depthrender/…

Before this, Tesla would have to manually label this as part of the training set to ensure the car doesn't run into it.
Here's a full intersection; the outputs seem quite reasonable in all directions. You can see the 4 buildings on each side, the curbs ahead, as well as the trees by the side of the road.

fn.lc/s/depthrender/…
Here's a hard median with a post that it correctly identifies. This adds another level of safety to ensure that the car doesn't drive over hard curbs.

This example highlights that the model ignores cars and only shows the static objects. fn.lc/s/depthrender/…
Pretty impressive how much detail it can capture of trees and the landscaping on the side of the road.

fn.lc/s/depthrender/…
Here's the view leaving a parking lot. It clearly distinguishes where the road is vs the T-bone style intersection.

fn.lc/s/depthrender/…
You can check out the raw data at fn.lc/s/depthrender/…

And the corresponding time synced video is at

The uploaded voxel frames are sampled every half second for practicality reasons (in the car it runs at a much higher FPS).
I suspect they're taking the same offline 3D models they use to label the birdseye view training data (as seen during AI day) and converting them to voxel data to train a net.

It's a very clever solution, kudos to the engineers who worked on this.
I'm very curious what the model architecture looks like and how much it differs from the other birdseye view nets.

The 3D convolutional NNs used here are similar to what could potentially be used to merge radar with vision if Tesla can get access to the raw Conti radar data.
The 3D birdseye view is a fair bit lower resolution than LIDAR but very impressive and serves much of the same purpose.
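For the curious, here's a toy PyTorch sketch of the kind of 3D convolutional head that could turn a voxel feature volume into per-cube occupancy probabilities. The layout, channel counts and grid size are all my own invention, not Tesla's actual architecture.

```python
import torch
import torch.nn as nn

# Toy 3D convolutional occupancy head -- invented layout, not Tesla's.
# Maps a voxel feature volume to one occupancy probability per cube.
class OccupancyHead(nn.Module):
    def __init__(self, in_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, kernel_size=1),  # one logit per cube
        )

    def forward(self, voxel_features):
        # voxel_features: (batch, channels, X, Y, Z)
        return torch.sigmoid(self.net(voxel_features)).squeeze(1)

head = OccupancyHead()
probs = head(torch.randn(1, 64, 96, 64, 12))  # small made-up grid
print(probs.shape)  # torch.Size([1, 96, 64, 12])
```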
These models are outputting probabilities so you can see where the model is confident vs not.

I don't quite know what the scale is here, but a 75% threshold seems to work pretty well. For all these renders I only show voxels that are above the target threshold.
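A rough sketch of that filtering step (the threshold and voxel size are just the values I settled on for my renders):

```python
import numpy as np

# Keep only cubes whose predicted probability clears the threshold and
# convert their indices to rough metric coordinates for rendering.
def visible_voxels(probs, threshold=0.75, voxel_size=0.33):
    """probs: (X, Y, Z) occupancy probabilities in [0, 1]."""
    idx = np.argwhere(probs > threshold)        # (N, 3) integer cube indices
    return idx.astype(np.float32) * voxel_size  # (N, 3) cube positions in metres

grid = np.random.rand(384, 255, 12).astype(np.float32)  # stand-in for real output
print(visible_voxels(grid).shape)
```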
Rendering voxel data in a browser is pretty tricky, so if anyone wants to help with more advanced visualizations, let me know :)

I see why Elon said they were having issues visualizing it


More from @rice_fry

Aug 21, 2022
@aelluswamy's talk at CVPR has a lot of very impressive improvements to Tesla's 3D voxel models. There are some subtle but very important things in the slides that I'm excited to incorporate into my own models. ⬇️

1) Image positional encoding: This adds an x/y position encoding to each of the image space features. This should make it easier for the transformer to go from image space to 3D.

It seems like a hybrid between a traditional CNN and a ViT-style image space positional encoding.
ViT uses patches of the image encoded with a position before feeding them through a transformer. Using a position encoding with a traditional CNN seems like a nice balance of efficiency and likely makes the per-camera encoder simpler.

[Image: the ViT architecture, from https://arxiv.org/abs/2010.11929]
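Here's my guess at what that looks like in PyTorch: concatenate normalised x/y coordinate channels onto each camera's CNN feature map before it goes into the transformer. The slides don't show the exact encoding Tesla uses, so treat this as a sketch of the idea.

```python
import torch

# Concatenate normalised pixel coordinates onto a per-camera CNN feature
# map -- my guess at the "x/y position encoding", not the real implementation.
def add_xy_encoding(features):
    """features: (batch, channels, H, W). Returns (batch, channels + 2, H, W)."""
    b, _, h, w = features.shape
    ys = torch.linspace(-1.0, 1.0, h, device=features.device)
    xs = torch.linspace(-1.0, 1.0, w, device=features.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y]).expand(b, -1, -1, -1)
    return torch.cat([features, coords], dim=1)

feats = torch.randn(2, 256, 40, 60)
print(add_xy_encoding(feats).shape)  # torch.Size([2, 258, 40, 60])
```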
Read 14 tweets
Jul 24, 2022
Curious what I've been up to in the past 6 months? 😅

I've been working on a novel approach to depth and occupancy understanding for my FSD models!

It's much simpler than existing techniques and directly learns the 3D representation ⬇️
I posted the full write up about a month ago and I've had a number of PhD students, companies and labs ask to collaborate on papers/projects, so I think it's state of the art 🙂

I haven't seen any papers on this.

Full write up: fn.lc/post/voxel-sfm/
In my last post I was using a multi-stage pipeline to train the models (rough sketch of steps 2-3 below):

1) train an image space depth model from the main camera
2) generate a point cloud from an entire video
3) convert to cubes
4) train a voxel model using multiple cameras and the per-pixel …

[Image: old voxel representation of …]
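Here's a rough sketch of steps 2-3: back-project a depth map into a point cloud with a pinhole model, then quantise the points into cubes. The camera intrinsics and the cube size below are placeholders, not the values I actually used.

```python
import numpy as np

# Step 2: depth map -> camera-frame point cloud (placeholder pinhole intrinsics).
def depth_to_points(depth, fx=500.0, fy=500.0):
    """depth: (H, W) depth in metres. Returns (N, 3) camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2) * depth / fx
    y = (v - h / 2) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Step 3: point cloud -> occupied cubes (placeholder cube size).
def points_to_voxels(points, voxel_size=0.33):
    """Quantise points to cube indices; count how many points hit each cube."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    cubes, counts = np.unique(idx, axis=0, return_counts=True)
    return cubes, counts

pts = depth_to_points(np.random.uniform(1.0, 50.0, (480, 640)))
cubes, counts = points_to_voxels(pts)
print(cubes.shape, counts.shape)  # occupied cubes and their point counts
```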
Read 15 tweets
Mar 14, 2022
Is the Tesla repeater light bleed a problem? I grabbed some captures from a 2020 Model 3 to find out

Here are some of the raw 10-bit captures and my analysis ⬇️

Thanks to @greentheonly for suggesting this!
When looking at this data there are two main things to consider: the static world around the vehicle and the dynamic objects in the scene such as cars or people.

For static objects, information from the forward-facing cameras can compensate for the lack of info on the repeaters.
Here's a static scene in low light. With the blinker off, the curb is too dark to see. The blinker actually helps since it provides light.

The nearby signs and the further-away barriers are mostly washed out, but since they're static they can be remembered.
Read 15 tweets
Jan 14, 2022
I spent some time over my 2 week holiday creating my own self driving models from the ground up in PyTorch 🙂

Open source self driving anyone?

Check out the full write up at: fn.lc/post/diy-self-…

I'll be summarizing it below ⬇️ 1/n

[Images: generated voxels from point cloud representations; a generated 3D point cloud view of WA-520; a residential street and the corresponding depth map]
These were trained from the raw footage without any Tesla NNs or outputs. It's more fun this way and makes it a lot more feasible to iterate.

I built everything here using just 5 of the 8 cameras and the vehicle speed, steering wheel position and IMU readings.
Early on I decided to focus on the models that wouldn't require me to label thousands of hours of data but are still critical to self driving.

What made the most sense was to try and recreate the 3D generalized static object network previously shown at:
Read 16 tweets
Nov 24, 2021
Curious what Tesla means by uprevving their static obstacle neural nets?

Let's see how the Tesla FSD Beta 10.5 3D Voxel nets compare to the nets from two months ago.

The new captures are from the same area as the old ones so we can directly compare the outputs

1/N
This first example is a small pedestrian crosswalk sign in the middle of the road. It's about 1 foot wide so it should show up as 1 pixel in the nets.

Under the old nets it shows up as a large blob with an incorrect depth. Under the new nets it's much better.

[Images: crosswalk sign in the middle of the street; under the old nets it's a large blob; under the new nets it's correctly sized to reality]
Under the old nets the posts show up as huge blobs and disappear when the car gets close to them. The probabilities seem fairly consistent no matter how far away the sign is, even though up close they should be more confident.

fn.lc/s/depthrender/…
Read 11 tweets
Apr 12, 2021
We recently got some insight into how Tesla is going to replace radar in the recent firmware updates + some nifty ML model techniques

⬇️ Thread
From the binaries we can see that they've added velocity and acceleration outputs. These predictions, in addition to the existing xyz outputs, give much of the same information that radar traditionally provides (distance + velocity + acceleration).
For autosteer on city streets, you need to know the velocity and acceleration of cars in all directions, but radar only points forward. If vision is accurate enough to make a left turn, radar is probably unnecessary for the most part.
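To make that concrete, here's a toy sketch of what per-object kinematics outputs could look like. The layer sizes and output names are invented; all the binaries tell us is that velocity and acceleration outputs exist alongside the xyz predictions.

```python
import torch
import torch.nn as nn

# Toy per-object kinematics head -- invented sizes and names, just to
# illustrate predicting position, velocity and acceleration per object.
class KinematicsHead(nn.Module):
    def __init__(self, feature_dim=256):
        super().__init__()
        self.position = nn.Linear(feature_dim, 3)      # x, y, z in metres
        self.velocity = nn.Linear(feature_dim, 3)      # m/s
        self.acceleration = nn.Linear(feature_dim, 3)  # m/s^2

    def forward(self, object_features):
        # object_features: (num_objects, feature_dim)
        return {
            "xyz": self.position(object_features),
            "vel": self.velocity(object_features),
            "acc": self.acceleration(object_features),
        }

head = KinematicsHead()
out = head(torch.randn(5, 256))
print({k: v.shape for k, v in out.items()})
```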
Read 15 tweets
