This is Karma. Karma is not a machine learning classifier πŸ•β€πŸ¦Ί

Karma is a real dog trained to detect drugs. However, he would fail the simplest tests we apply in ML...

Let me take you through this story through the eyes of an ML engineer.

reason.com/2021/05/13/the…

Thread 🧡
Story TLDR πŸ”–

The story is about police dogs trained to sniff drugs. The problem is that the dogs often signal drugs even if there are none. Then innocent people land in jail for days.

The cops even joke about the β€œprobable cause on four legs”.

Let's see why that is πŸ‘‡
1. Sampling Bias 🀏

Drugs were found in 64% of the cars Karma flagged, which the police praised as very good. After all, most people don't carry drugs in their cars, so 64% seems solid.

There was a sampling problem though... πŸ‘‡
The cars were not sampled at random! The police only ran the sniff test when there was already serious suspicion that something was wrong.

The chance of finding drugs in such a car is much higher!
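A quick Bayes' theorem sketch of this effect (the numbers below are made up for illustration, not from the article): the same dog with the same skill produces a very different hit rate depending on the base rate of drugs in the cars it gets to sniff.

```python
def hit_rate(base_rate, recall, false_positive_rate):
    """P(drugs | dog alerts), via Bayes' theorem."""
    true_alerts = base_rate * recall
    false_alerts = (1 - base_rate) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Among pre-screened "suspicious" cars, drugs are common: 64% looks great.
print(hit_rate(base_rate=0.5, recall=0.9, false_positive_rate=0.5))   # ~0.64
# Among randomly stopped cars drugs are rare - same dog, far worse number.
print(hit_rate(base_rate=0.05, recall=0.9, false_positive_rate=0.5))  # ~0.09
```

The dog didn't change between the two lines, only the sampling did.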
2. Evaluation Metrics πŸ”

The police referred to a 2014 study from Poland measuring the efficacy of sniffer dogs. The problem was that every test actually contained drugs!

This means there was no chance to measure false positives from the dogs! Only recall, not precision πŸ€¦β€β™‚οΈ
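A minimal sketch of why such a study design is broken: if every test sample is positive, false positives are impossible by construction, so even a "dog" that alerts on everything scores perfectly.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall from binary labels and predictions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else None
    recall = tp / (tp + fn) if tp + fn else None
    return precision, recall

# A test set like the one in the study: every sample contains drugs.
# Alerting on everything looks perfect - there are no negatives to get wrong.
print(precision_recall([1, 1, 1, 1], [1, 1, 1, 1]))  # (1.0, 1.0)
# With negatives in the test set, the same strategy is exposed.
print(precision_recall([1, 0, 1, 0], [1, 1, 1, 1]))  # (0.5, 1.0)
```

Without negative samples you can only ever measure recall; precision is trivially perfect.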
3. Leaking Training Data 🚰

Another study found that the dogs learned to read the emotions of their handlers during tests. They sensed that their human wanted them to find drugs in a specific test scenario, so they did.

The trainer leaked the ground truth during testing.
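This is classic label leakage. A toy illustration (the setup is assumed, not from the article): a feature that secretly encodes the label, like the handler's body language, makes any "model" look flawless while it has learned nothing about the actual task.

```python
import random

random.seed(0)
labels = [random.randint(0, 1) for _ in range(100)]
# "handler_cue" is just the label in disguise; "scent" is pure noise.
data = [{"scent": random.random(), "handler_cue": y} for y in labels]

# A "model" that reads the leaked cue gets 100% accuracy without ever
# learning anything about detecting drugs.
predictions = [x["handler_cue"] for x in data]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 1.0
```

Remove the leaked feature and performance collapses to chance, which is exactly how such leaks are usually discovered.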
4. Overfitting ➿

Similar to the point above: in many cases, the dog saw that its handler wanted to find drugs in a car during a traffic stop, so it would raise an alarm.

The dog was rewarded before the car was actually searched! It found an easy signal that earned it a reward.
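In ML terms, the dog optimized a proxy reward instead of the true objective. A sketch with made-up numbers: if the treat comes at alert time rather than after the search confirms drugs, "always alert" becomes the winning strategy.

```python
def expected_reward(p_alert, p_drugs, reward_on_alert=True):
    """Expected reward per stop under two reward schemes."""
    if reward_on_alert:
        # Proxy objective: the dog is rewarded for every alert,
        # regardless of what the search later finds.
        return p_alert
    # True objective: reward only when the alert is confirmed.
    return p_alert * p_drugs

# Under the proxy reward, alerting every time maximizes treats...
print(expected_reward(p_alert=1.0, p_drugs=0.1))                         # 1.0
# ...while under the true reward the same strategy mostly goes unpaid.
print(expected_reward(p_alert=1.0, p_drugs=0.1, reward_on_alert=False))  # 0.1
```

Rewarding the model (or the dog) on a signal that is cheaper than the real task invites exactly this shortcut.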
Summary 🏁

It is fascinating how many of the sniffer dogs' problems are well known to machine learning engineers (and of course mathematicians). Some of them are even common sense...

Avoid these problems not only when training your model, but also in life πŸ˜ƒ
If you liked this thread and want to read more about self-driving cars and machine learning follow me @haltakov!


