1) I think the AP driver assist product is an extremely valuable feature; Human+AP already looks statistically safer than human alone & it makes long-distance driving far less tiring.
In my view Tesla’s current 2D image-based Autopilot architecture is likely incapable of solving “pseudo-lidar” depth estimation well enough to support Robotaxis. But I don’t think it was ever planned or expected to.
However, I think Tesla’s plan always involved using a new architecture to solve this problem – this was the key reason for developing the HW3 chip & the key project @Karpathy has been working on since he joined.
Since then I think @Karpathy has been working to perfect this architecture & take multi-second video from all cameras to feed into a single NN.
With multiple cameras, the neural net can run parallax calculations across many camera pairs to estimate object distance. With full video frames it has multiple reference points from which to calculate velocity.
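To make the geometry concrete, here's a minimal sketch of the classic stereo-parallax relation (depth Z = f·B/d) and of recovering radial velocity from depth estimates one frame apart. All numbers (focal length, baseline, disparity, frame interval) are illustrative assumptions, not Tesla's actual parameters — and in practice the NN would learn these relationships implicitly from data rather than compute them explicitly like this.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px     -- camera focal length in pixels (assumed value)
    baseline_m   -- distance between the two cameras in metres (assumed)
    disparity_px -- horizontal pixel offset of the object between the views
    """
    return focal_px * baseline_m / disparity_px


def radial_velocity(z1_m: float, z2_m: float, dt_s: float) -> float:
    """Radial velocity from two depth estimates dt_s seconds apart.

    Negative result means the object is closing on the cameras.
    """
    return (z2_m - z1_m) / dt_s


# Illustrative numbers: 1000 px focal length, 0.3 m baseline.
z1 = depth_from_disparity(1000.0, 0.3, 20.0)  # 15.0 m away
z2 = depth_from_disparity(1000.0, 0.3, 21.0)  # ~14.29 m one frame later
v = radial_velocity(z1, z2, 0.1)              # closing at ~7.1 m/s
```

The same idea extends to the multi-camera case the thread describes: every camera pair with overlapping fields of view gives an independent disparity estimate, and every extra video frame gives another baseline in time for the velocity estimate.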
(This has got rather long, so I’ll continue in a later thread).