The story is about police dogs trained to sniff out drugs. The problem is that the dogs often signal drugs even when there are none, and innocent people land in jail for days.
The cops even joke about "probable cause on four legs".
Let's see why that is 👇
1. Sampling Bias 🤔
Drugs were found in 64% of the cars Karma (one of the dogs) identified, which the police praised as a very good result. After all, most people don't carry drugs in their cars, so 64% seems solid.
There was a sampling problem, though... 👇
The cars were not sampled at random! The police only did the sniff test when there was serious suspicion that something was wrong.
In that case, the chance that there are drugs in the car is much higher!
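Bayes' rule makes this concrete: the same dog produces a very different hit rate depending on the base rate of drugs in the cars it sniffs. A minimal sketch, where the sensitivity and false-positive rate are invented for illustration and not taken from the story:

```python
def p_drugs_given_alert(base_rate, sensitivity, false_positive_rate):
    """Bayes' rule: P(drugs | dog alerts)."""
    p_alert = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_alert

# The same (hypothetical) dog in two sampling regimes:
random_stops = p_drugs_given_alert(base_rate=0.01, sensitivity=0.9,
                                   false_positive_rate=0.2)
suspicious_stops = p_drugs_given_alert(base_rate=0.3, sensitivity=0.9,
                                       false_positive_rate=0.2)

print(round(random_stops, 2))      # 0.04 -- at a low base rate most alerts are false
print(round(suspicious_stops, 2))  # 0.66 -- pre-filtered stops make the dog look great
```

With these made-up numbers the pre-filtered result even lands in the same ballpark as the 64% from the story - the dog didn't change, only the sample did.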
2. Evaluation Metrics
The police referred to a 2014 study from Poland measuring the efficacy of sniffer dogs. The problem was that every test actually contained drugs!
This means there was no way to measure the dogs' false positives! You could only measure recall, not precision 🤦‍♂️
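A quick sketch of why that matters. With drugs in every trial there are no negative samples, so false positives cannot occur by construction - recall looks great while field precision stays unknown. All numbers below are made up:

```python
def recall(tp, fn):
    """Of all cars WITH drugs, how many did the dog flag?"""
    return tp / (tp + fn)

def precision(tp, fp):
    """Of all cars the dog FLAGGED, how many actually had drugs?"""
    return tp / (tp + fp)

# Study-style setup: every trial contains drugs; say the dog finds 45 of 50.
print(recall(45, 5))  # 0.9 -- impressive, but says nothing about false alarms

# Hypothetical field deployment: 1000 cars, 1% carry drugs,
# and the dog has a 20% false-positive rate on clean cars.
tp = round(0.9 * 10)   # 9 true positives
fp = round(0.2 * 990)  # 198 false positives
print(round(precision(tp, fp), 2))  # 0.04 -- most alerts point at innocent drivers
```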
3. Leaking Training Data
Another study found that the dogs learned to read the emotions of their handlers during tests. They sensed that their human wanted them to find drugs in the specific test scenario, so they did.
The handler leaked the ground truth during testing.
4. Overfitting
Similar to the point above: in many cases, the dog saw that its handler wanted to find drugs in a car during a traffic stop, so it would raise an alarm.
The dog was rewarded before the car was actually searched! It found an easy signal that earned it a reward.
Summary
It is fascinating how many of the problems with the sniffer dogs are well known to machine learning engineers (and of course mathematicians). Some of them are even common sense...
Avoid these problems not only when training your models, but also in life.
If you liked this thread and want to read more about self-driving cars and machine learning follow me @haltakov!
• • •
There are different computer vision problems you need to solve in a self-driving car.
▪️ Object detection
▪️ Lane detection
▪️ Drivable space detection
▪️ Semantic segmentation
▪️ Depth estimation
▪️ Visual odometry
Details 👇
Object Detection
One of the most fundamental tasks - we need to know where other cars and people are, and which signs, traffic lights, and road markings need to be considered. Objects are identified by 2D or 3D bounding boxes.
Relevant methods: R-CNN, Fast(er) R-CNN, YOLO
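Detectors like these are typically evaluated by the Intersection over Union (IoU) between predicted and ground-truth boxes - a prediction usually counts as correct when IoU exceeds a threshold such as 0.5. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 -- perfect match
```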
Distance Estimation
After you know what objects are present and where they are in the image, you need to know where they are in the 3D world.
Since the camera is a 2D sensor, you first need to estimate the distance to the objects.
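One classic monocular trick uses the pinhole camera model: assume the object's real-world size and recover distance via similar triangles. A sketch with assumed numbers (real systems also use stereo, lidar, or learned depth):

```python
def distance_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole model: the projection h = f * H / d gives d = f * H / h."""
    return focal_px * real_height_m / pixel_height

# Assumed values: a car ~1.5 m tall, appearing 75 px tall in an image
# taken with a camera whose focal length is 1000 px.
print(distance_from_height(1000, 1.5, 75))  # 20.0 metres
```

Note the weakness: if the size assumption is wrong (a truck, a child), the distance estimate is wrong by the same factor.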
Interesting results from the small experiment... 👇
This was actually a study reported in a Nature paper. Most people offer additive solutions (adding bricks) instead of subtractive solutions (removing the pillar).
In this example, the most elegant solution is to remove the pillar completely and let the roof rest on the block. It is simpler, more stable, and costs nothing.
Some people quickly dismiss this option, assuming it is not allowed, but it actually is 👇
This isn't because people don't recognize the value, but because many don't consider the subtractive solution at all. Me included.
The paper shows that this happens a lot in real life, especially in regulation. People tend to add new rules, instead of removing old ones.
End-to-end approach to self-driving
I recently wrote about the classical software architecture for a self-driving car. The end-to-end approach is an interesting alternative.
The idea is to go directly from images to the control commands.
Let me tell you more... 👇
This approach is actually very old, dating back to 1989 and CMU's ALVINN model - a 3-layer neural network using camera images and a laser range finder.
A modern example is Nvidia's PilotNet - a convolutional neural network with about 250,000 parameters that takes the raw camera image as input and directly predicts the steering angle of the car.
No explicit lane boundary or freespace detection needed!
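To make the idea concrete, here is a minimal ALVINN-style sketch: a tiny 3-layer network mapping a coarse camera frame straight to a steering command. The frame size, layer widths, and random untrained weights are assumptions - this shows only the end-to-end structure, not a working controller:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in=30 * 32, n_hidden=29, n_out=1):
    """Random, untrained weights: input -> hidden -> steering output."""
    return (rng.normal(0, 0.1, (n_in, n_hidden)),
            rng.normal(0, 0.1, (n_hidden, n_out)))

def steering_angle(image, weights):
    """Image in, steering command out - no explicit lane or freespace detection."""
    w1, w2 = weights
    x = image.reshape(-1)              # flatten the coarse camera frame
    h = np.tanh(x @ w1)                # hidden layer
    return np.tanh(h @ w2).item()      # steering command in (-1, 1)

weights = init_weights()
frame = rng.random((30, 32))           # fake 30x32 grayscale camera frame
angle = steering_angle(frame, weights)
print(-1.0 < angle < 1.0)              # True
```

Training would replace the random weights by regressing recorded human steering angles against the corresponding frames - the network discovers lane-like features on its own.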
Open-Source Self-Driving Car Simulators
Do you want to play around with self-driving car software and gather some experience? Check out these open-source self-driving car simulators!
Details below 👇
CARLA
CARLA is a great simulator developed by Intel. You can use it to work on any step of the pipeline and to model different sensors, maps, and traffic. It also integrates with ROS.
Another great simulator by Voyage - the self-driving company that was recently acquired by Cruise. It is built on the Unreal Engine and supports lots of features.