End-to-end approach to self-driving 🔥🕹️
I recently wrote about the classical software architecture for a self-driving car. The end-to-end approach is an interesting alternative.
The idea is to go directly from images to the control commands.
Let me tell you more... 👇
This approach is actually very old, dating back to 1989 and the ALVINN model from CMU - a 3-layer neural network that used camera images and a laser range finder as input.
A modern example is Nvidia's PilotNet - a Convolutional Neural Network with about 250k parameters that takes the raw camera image as input and directly predicts the steering angle of the car.
No explicit lane boundary or freespace detection needed!
Training is easy! When a human drives the car, we can record the camera images and the actual steering angle as ground truth 🤷‍♂️
The network then learns to predict a steering angle similar to what the human driver chose - this is called imitation learning.
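Here is a minimal PyTorch sketch of this idea - an illustrative PilotNet-style network and one imitation-learning training step. Layer sizes and names are my assumptions for the sketch, not Nvidia's exact architecture:

```python
import torch
import torch.nn as nn

# Illustrative PilotNet-style CNN: raw camera image in, steering angle out.
# The layer sizes here are assumptions, not Nvidia's exact architecture.
class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),  # predicted steering angle
        )

    def forward(self, image):
        return self.head(self.features(image))

# Imitation learning: regress the human driver's steering angle.
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a dummy batch (a stand-in for recorded driving data).
images = torch.randn(8, 3, 66, 200)  # camera frames
human_angles = torch.randn(8, 1)     # ground truth from the human driver

optimizer.zero_grad()
loss = loss_fn(model(images), human_angles)
loss.backward()
optimizer.step()
```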
Take a look at this video to see the system in action. Around the 8-minute mark you can see what the network actually "sees".
Now, to be clear - this is not really a full self-driving car, but only a model that does the steering! There is much more you need to do to actually let the car drive by itself:
▪️ Longitudinal control (acceleration and braking) - see the sketch after this list
▪️ Lane changes
▪️ Emergency maneuvers
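To give a flavor of the longitudinal control part: a minimal PID cruise controller. The gains and interface are made up for illustration - real stacks add feed-forward terms, actuator limits and safety checks:

```python
# Minimal PID cruise controller for longitudinal control (illustrative only).
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output -> throttle, negative -> brake.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The gains below are placeholder values, not tuned for any real vehicle.
controller = PIDController(kp=0.5, ki=0.05, kd=0.1)
command = controller.step(target_speed=30.0, current_speed=25.0, dt=0.05)
```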
A similar approach is implemented by Comma AI in their newest version of Openpilot.
They train a version of EfficientNet combined with a head that predicts the trajectory the car needs to drive.
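As a rough sketch of what such a model could look like - a backbone plus a waypoint-regression head. This is my simplified illustration, not Comma's actual network, and the waypoint count is made up:

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

# Illustrative backbone + trajectory head; openpilot's real model differs.
class TrajectoryNet(nn.Module):
    def __init__(self, num_waypoints=33):  # 33 is an arbitrary choice here
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        self.features = backbone.features  # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Predict (x, y) for each future waypoint along the planned path.
        self.head = nn.Linear(1280, num_waypoints * 2)

    def forward(self, image):
        x = self.pool(self.features(image)).flatten(1)
        return self.head(x).view(image.shape[0], -1, 2)

# Dummy forward pass: one camera frame in, a sequence of waypoints out.
waypoints = TrajectoryNet()(torch.randn(1, 3, 256, 512))
print(waypoints.shape)  # -> torch.Size([1, 33, 2])
```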
The good ✅
The advantage of end-to-end networks is that you can get small and efficient models - the network focuses only on solving the final task and doesn't waste capacity on intermediate representations.
Collecting training data is fairly easy as well - no manual labeling needed!
The bad ❌
The disadvantage is that it is very difficult to understand why the network makes mistakes - there are no explicit intermediate representations, like lane boundaries, to inspect.
It will also need lots of data to cover all possible scenarios on the road.
There is an interesting paper by Prof. Shashua, CEO of Mobileye, arguing that in order to reach very high accuracy, end-to-end methods will require exponentially more training samples than more modular approaches.
There are now approaches that try to combine the advantages of both the modular and the end-to-end approaches. I recommend watching this great talk by Prof. Raquel Urtasun from Uber ATG.
Read more about the classical software architecture for self-driving cars in my other thread:
There are different computer vision problems you need to solve in a self-driving car.
▪️ Object detection
▪️ Lane detection
▪️ Drivable space detection
▪️ Semantic segmentation
▪️ Depth estimation
▪️ Visual odometry
Details 👇
Object Detection 🚗🚶‍♂️🚦🛑
One of the most fundamental tasks - we need to know where other cars and people are, and which signs, traffic lights and road markings need to be considered. Objects are typically identified by 2D or 3D bounding boxes.
Relevant methods: R-CNN, Fast(er) R-CNN, YOLO
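If you want to try one of these out, torchvision ships a Faster R-CNN pretrained on COCO (which covers cars, people and traffic lights). A quick sketch - the image path and score threshold are placeholders:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a Faster R-CNN pretrained on COCO and switch to inference mode.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "dashcam.jpg" is a placeholder path - use your own image here.
image = convert_image_dtype(read_image("dashcam.jpg"), torch.float)

with torch.no_grad():
    predictions = model([image])[0]

# Keep only confident detections: each has a box, a class label and a score.
for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.8:
        print(label.item(), box.tolist())
```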
Distance Estimation 📏
After you know what objects are present and where they are in the image, you need to know where they are in the 3D world.
Since the camera is a 2D sensor, you need to first estimate the distance to the objects.
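The simplest heuristic uses the pinhole camera model: if you know an object's real-world height and the camera's focal length in pixels, the height of its bounding box gives you a distance estimate. A back-of-the-envelope sketch with made-up numbers:

```python
def estimate_distance(focal_length_px, real_height_m, bbox_height_px):
    """Pinhole camera model: distance = f * H / h.

    Assumes the object is roughly upright and fully visible -
    a rough heuristic, not a production-grade estimator.
    """
    return focal_length_px * real_height_m / bbox_height_px

# Example with made-up numbers: a ~1.5 m tall car whose bounding box
# is 75 px high, seen by a camera with a 1000 px focal length.
print(estimate_distance(1000.0, 1.5, 75.0))  # -> 20.0 meters
```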
Interesting results from the small experiment... 👇
This was actually a study reported in a Nature paper. Most people offer additive solutions (adding bricks) instead of subtractive solutions (removing the pillar).
In this example, the most elegant solution is to remove the pillar completely and let the roof rest on the block. It is simpler, more stable and won't cost anything.
Some people quickly dismiss this option assuming it is not allowed, but it actually is!
This isn't because people don't recognize the value, but because many don't consider the subtractive solution at all. Me included 🙋‍♂️
The paper shows that this happens a lot in real life, especially in regulation. People tend to add new rules instead of removing old ones.
Open-Source Self-Driving Car Simulators 🕹️👇
Want to play around with self-driving car software and gather some experience? Check out these open-source self-driving car simulators!
Details below 👇
CARLA
CARLA is a great open-source simulator backed by Intel. You can use it to work on any part of the pipeline and to model different sensors, maps and traffic. It also integrates with ROS.
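A minimal example with CARLA's Python API - connect to a running server and spawn a vehicle on autopilot. This assumes a CARLA server on the default port and the `carla` package installed:

```python
import carla

# Connect to a CARLA server already running on the default port.
client = carla.Client("localhost", 2000)
client.set_timeout(5.0)
world = client.get_world()

# Pick a vehicle blueprint and a predefined spawn point from the map.
blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]

# Spawn the car and hand control to CARLA's built-in autopilot.
vehicle = world.spawn_actor(blueprint, spawn_point)
vehicle.set_autopilot(True)
```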
Deepdrive
Another great simulator by Voyage - the self-driving company that was recently acquired by Cruise. It is built on the Unreal Engine and supports lots of features.
Useful online courses on self-driving cars 🧠👇
Here is a list of useful courses if you want to learn about software for self-driving cars.
Some of the courses are paid, but all platforms offer regular discounts and financial aid if you can't afford them.
Thread 👇
Udacity Self-Driving Car Nanodegree
This program offers hands-on experience with all kinds of relevant topics like perception, localization, planning and control. It takes a lot of time, but it is worth it.