A lot of the technology behind the FSD software was presented at a conference in February. Check out this video; it is really interesting!
The main point is that Tesla uses a "bird's-eye view" neural network to predict the layout of the road.
You can see this in action in the car: the display shows the lines defining the lanes and the road boundaries. These seem to come from the vision system rather than from a map, because they are very unstable and often not very accurate.
The good
Looking at videos, the system seems to work very well in most cases: following cars, stopping at stop signs and traffic lights, taking turns, and even driving around wrongly parked vehicles. Check out this video for an example of the last.
The bad
However, the lack of (HD) map data combined with weaknesses of the vision system is apparent in many cases.
For example, check out how it has difficulty taking a right turn because it doesn't "see" the geometry of the road correctly.
There are also many examples of wrong left turns, where the Tesla tries to enter the lane of oncoming traffic.
And here, it wrongly classifies the left lane as oncoming even though it carries traffic in the same direction. Watch how the classification of the left lane boundary toggles between white and yellow.
The ugly
It also seems to have problems detecting and predicting cross traffic in some cases, which is very dangerous.
Conclusion
Overall, the system seems to be in its very early days, and the driver needs to be ready to take control at any time when it does something crazy and dangerous.
This visualization is amazing, though - it's almost like the dev debug display that others have...
Maps
It seems that Tesla is currently not using HD map data. They do have some map data, though - you can see the car stopping for traffic lights and stop signs even when they are too far away to see or hidden behind a corner.
I would be surprised, though, if Tesla isn't building some kind of "sparse" HD map containing things like lanes, road geometry and semantics, positions of traffic lights and stop signs, etc.
The system makes mistakes, but drivers regularly use the report button to send Tesla feedback when this happens!
You can debate whether it is safe to release such an early version (even as a limited beta), but the data they get is extremely valuable!
Further reading
Check out these links if you are interested:
- All the videos that @brandonee916 is uploading of him testing the FSD software.
- An interesting review of the system by @olivercameron.
Self-driving car engineer roles - Computer Vision Engineer
The camera is one of the most important sensors! It is not always the most accurate one, but it can provide much more data than a lidar or radar. Extracting this data is the job of the CV engineer.
Thread
Problems to work on
Here are some typical object classes that need to be detected and classified:
- Traffic signs
- Traffic lights
- Vehicles
- Pedestrians
- Animals
- Lane markings
- Landmarks
- Construction zones
- Obstacles
- Police cars
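A detector for classes like these typically outputs many overlapping candidate boxes for the same object, and a standard post-processing step is non-maximum suppression (NMS). Here is a minimal sketch of greedy NMS - an illustrative toy, not any production implementation; the boxes and scores below are made-up values:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop all boxes that
    overlap it beyond iou_thresh, and repeat with the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two detections of the same vehicle plus one pedestrian box (made-up data)
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 160)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # the duplicate vehicle box is suppressed
```

Production stacks would use an optimized version of this (e.g. a batched GPU implementation), but the idea is the same.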
Distance estimation
Detecting an object is not enough, though. You also want to know how far the object is from the car. While the detection part is dominated by deep learning, the traditional CV methods (e.g. Kalman Filter) are still very useful for distance estimation.
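To make this concrete, here is a minimal 1D constant-velocity Kalman filter that smooths noisy per-frame distance measurements and also estimates the closing speed. This is an illustrative sketch only - the frame rate, noise variances, and measurement values are all assumptions, not anyone's actual system:

```python
import numpy as np

dt = 0.1                      # time between camera frames (s), assumed
F = np.array([[1.0, dt],      # state transition for [distance, speed]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])    # we only measure distance directly
Q = np.diag([0.01, 0.1])      # process noise covariance (assumed)
R = np.array([[4.0]])         # measurement noise variance (assumed)

x = np.array([[50.0], [0.0]]) # initial guess: 50 m away, not moving
P = np.eye(2) * 10.0          # initial state uncertainty

def kf_step(x, P, z):
    """One predict/update cycle with a scalar distance measurement z."""
    # Predict: propagate state and uncertainty one frame forward
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measurement
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate an object approaching at ~2 m/s, measured with noise (std 2 m)
rng = np.random.default_rng(0)
true_dist = 50.0
for _ in range(50):
    true_dist -= 2.0 * dt
    z = true_dist + rng.normal(0.0, 2.0)
    x, P = kf_step(x, P, z)

print(f"estimated distance: {x[0, 0]:.1f} m, closing speed: {x[1, 0]:.1f} m/s")
```

Note how the filter recovers the closing speed even though only distance is measured - that derived quantity is exactly what downstream planning needs.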
Self-driving car engineer roles - Software Engineer
There are many specialized roles in a self-driving car project, like ML or CV engineers. However, every project needs lots of good software devs - you can enter the industry even without domain-specific knowledge!
Thread
Problems to work on
Some problems that software developers work on to build a self-driving car (the list is not exhaustive):
- HMI
- Operating system
- Logging and tracing
- Communication between ECUs
- Internal frameworks and libraries
- Implementing diagnostic interfaces
Software engineers also work closely with many of the more specialized roles.
For example, with machine learning engineers to deploy models on the ECU, or with vehicle control engineers to get their algorithms running efficiently.
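Much of the infrastructure work in the list above - communication between ECUs, internal frameworks - boils down to moving messages between components. As a toy illustration, here is a minimal in-process publish/subscribe bus; real automotive stacks use dedicated middleware (DDS-, SOME/IP-, or ROS-style), so treat this purely as a sketch of the pattern:

```python
from collections import defaultdict
from typing import Any, Callable

class Bus:
    """Toy in-process publish/subscribe bus. Components subscribe to
    named topics and receive every message published on them."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver synchronously to all subscribers of this topic
        for cb in self._subs[topic]:
            cb(message)

bus = Bus()
received = []
# e.g. a planning component subscribing to the perception output
bus.subscribe("detections", received.append)
bus.publish("detections", {"class": "vehicle", "distance_m": 41.2})
print(received)
```

The decoupling is the point: the perception side doesn't need to know which (or how many) components consume its output, which is what makes the specialized teams able to work independently.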
Some interesting self-driving news lately:
- Waymo launching test fleet without safety driver
- Tesla launching a beta of their Full Self-Driving
- Mercedes announcing a level 3 traffic jam pilot for 2021
There are 3 very different approaches
1️⃣ The "everything that fits" approach.
This is Waymo's approach, but other companies like Cruise, Argo, Aurora, Uber, Zoox have a similar strategy.
Fit as many sensors as possible on the car, build high-definition maps of the environment, and throw in lots of compute power.
Check out some images of these cars - they all have multiple lidars, cameras and radars all around the car. Waymo now has 29 cameras!
The sensors are not really integrated in a consumer-oriented way, but that should be fine for a robotaxi.
How to become a self-driving car engineer?
Well, there is no easy answer - building a self-driving car requires expertise from many different fields.
The good news is that there are many paths that will lead you to a job in this industry!
Read more details below.
There are many different engineering roles in a self-driving car project:
- Software Development
- Machine Learning
- Computer Vision
- Vehicle Control
- Hardware
- Big Data
- Simulation
- Mapping
- Safety
- Security
- Test and Validation
- Tooling
and more...
Over the next few days I'll share some of my experience working in the industry and describe the different roles in more detail, along with the skills required to enter the field.