A short lesson on object tracking 🧑🏻‍🏫

Look at this video from a @Tesla Model 3 driving on the highway. The display shows multiple traffic lights coming out of the truck in front towards the car. What's going on? 🤔

This is a typical case of a track loss!

Thread 👇
The problem 🤔

The truck in front carries 3 real traffic lights. The problem is that the computer vision system on the Tesla assumes that traffic lights are static (which is a good assumption in general 😄). In this case, though, the traffic lights are moving at 120 km/h...

👇
Object detection 🚦

A typical object detection system takes a single camera frame and detects all kinds of objects in it.

One of the best models for object detection is YOLO. I just ran this image through it and sure enough, it detects 2 of the traffic lights!
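
If you want to try it yourself, here's a minimal sketch in Python. It assumes the ultralytics YOLO package and a saved camera frame "frame.jpg" — both my placeholders, not what Tesla actually runs:

```python
# Minimal sketch: single-frame object detection with a pretrained YOLO model.
# Assumes the `ultralytics` package is installed; "frame.jpg" is a placeholder.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small model pretrained on COCO
results = model("frame.jpg")      # run detection on one camera frame

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    if cls_name == "traffic light":
        print(cls_name, box.conf.item(), box.xyxy.tolist())
```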

👇
If you want to learn more about how deep-learning-based object detection works, check out this thread.



👇
Object tracking 📍

We now want to track an object over multiple frames. This lets us determine its position more accurately over time.

Given the detections in two frames, we need to associate each object in the first frame with an object in the second frame.
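
One standard way to do this (not necessarily what Tesla does) is to build a cost matrix between the two detection sets and solve the assignment problem, e.g. with the Hungarian algorithm. A minimal sketch with made-up box centers:

```python
# Minimal sketch: frame-to-frame association via the Hungarian algorithm.
# Detections are represented by their (x, y) box centers; values are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

frame1 = np.array([[100, 50], [220, 60], [340, 55]])  # centers in frame t
frame2 = np.array([[103, 52], [225, 63], [338, 58]])  # centers in frame t+1

# Cost matrix: Euclidean distance between every pair of detections.
cost = np.linalg.norm(frame1[:, None, :] - frame2[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)   # minimum-cost matching
for i, j in zip(rows, cols):
    print(f"object {i} in frame t -> object {j} in frame t+1 (cost {cost[i, j]:.1f})")
```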

👇
Static object assumption 🚦

For traffic lights, it is reasonable to assume they are static. The change of their position between two frames then depends only on the motion of our own car.

Since we know our own speed, we can predict where the light will be in the next frame.
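
Here's a toy illustration of that prediction in ego-vehicle coordinates (all numbers made up), and why it breaks when the "static" light actually rides on a truck matching our speed:

```python
# Minimal sketch: predicting a static object's position from ego motion only.
# Positions are in ego-vehicle coordinates (x = meters ahead of the car).
def predict_static(obj_x, ego_speed_mps, dt):
    """Where a *static* object should appear after dt seconds."""
    return obj_x - ego_speed_mps * dt

light_x   = 50.0        # light detected 50 m ahead (made-up number)
ego_speed = 120 / 3.6   # 120 km/h in m/s
predicted = predict_static(light_x, ego_speed, dt=0.1)
print(f"predicted: {predicted:.1f} m")              # ~46.7 m if truly static

# But the truck carrying the light also drives at 120 km/h, so the real
# light is still ~50 m ahead in the next frame:
print(f"error: {abs(light_x - predicted):.1f} m")   # ~3.3 m after just 0.1 s
```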

👇
Track loss 🤚

The problem is that these 🚦 are actually moving, so the prediction is wrong. The Tesla cannot associate them anymore, so:

1๏ธโƒฃ The light from the previous frame is counted as lost
2๏ธโƒฃ The light from this frame is regarded as a new object seen for the first time

👇
Tracking 🚗

What happens to the lost object? The Tesla assumes the light simply can't be detected anymore for some reason, so it keeps showing the light on the display at its expected position, extrapolated from the vehicle speed.

That's why the lights come "flying" towards the car in the display.
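Keeping a lost track alive by dead-reckoning its predicted position is often called "coasting". A toy continuation of the earlier sketch shows how the displayed position slides toward the car:

```python
# Minimal sketch: "coasting" a lost track under the static-object assumption.
ego_speed = 120 / 3.6   # m/s
x = 50.0                # last known position of the lost light (m ahead)
for frame in range(5):
    x -= ego_speed * 0.1                          # dead-reckon: static objects drift toward us
    print(f"frame {frame}: displayed at {x:.1f} m")
# 46.7, 43.3, 40.0, ... the displayed light races toward the car at 120 km/h,
# while the real light on the truck stays ~50 m ahead -> "flying" lights.
```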
As you can see in the video, this happens all the time. This is what happens when we make assumptions about the real world that fail in some situations... 🤷‍♂️

This video was first posted on Reddit here:
reddit.com/r/teslamotors/…
H/t to @JBiserkov for showing me this Reddit post 😉


