Efficient and Robust
LiDAR-Based
End-to-End Navigation
Paper: arxiv.org/pdf/2105.09932
Credit: Zhijian Liu, Alexander Amini, Sibo Zhu, Sertac Karaman, Song Han, Daniela Rus
Deep learning has been used to demonstrate end-to-end neural network learning for autonomous vehicle control from raw sensory input.
While LiDAR sensors provide reliably accurate information, existing end-to-end driving solutions are mainly based on cameras, since processing 3D data incurs a large memory footprint and high computation cost.
Increasing the robustness of these systems is equally critical; however, even estimating the model's uncertainty is challenging because sampling-based methods are expensive.
In this paper, they present an efficient and robust LiDAR-based end-to-end navigation framework.
They first introduce Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
They then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass and then fuses the control predictions intelligently.
They evaluate their system on a full-scale vehicle and demonstrate lane-stable as well as navigation capabilities.
In the presence of out-of-distribution events (e.g., sensor failures), their system significantly improves robustness and reduces the number of takeovers in the real world.
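The single-pass uncertainty idea can be sketched roughly as follows. This is a minimal illustration assuming a deep evidential regression head that outputs Normal-Inverse-Gamma parameters (as in Amini et al.'s evidential deep learning); the function names and the inverse-variance fusion rule here are illustrative, not the paper's exact formulation:

```python
import numpy as np

def nig_epistemic_uncertainty(gamma, nu, alpha, beta):
    # An evidential head predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)
    # in one forward pass; the epistemic variance is beta / (nu * (alpha - 1)).
    # No sampling (e.g. MC dropout or ensembles) is needed.
    return beta / (nu * (alpha - 1.0))

def fuse_controls(predictions, uncertainties):
    # Inverse-variance weighting: more confident control predictions
    # contribute more to the fused steering command.
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    p = np.asarray(predictions, dtype=float)
    return float(np.sum(w * p) / np.sum(w))
```

For instance, fusing predictions 0.0 and 1.0 with uncertainties 1.0 and 3.0 yields 0.25, pulled toward the more confident branch.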
• • •
Lidar with Velocity:
Motion Distortion Correction of Point Clouds
from Oscillating Scanning Lidars
In this paper, Gaussian-based lidar and camera fusion is proposed to estimate the full velocity and correct the lidar distortion.
Lidar point cloud distortion from moving objects is an important problem in autonomous driving, and it has recently become even more pressing with the emergence of newer lidars that feature back-and-forth scanning patterns.
Accurately estimating a moving object's velocity not only provides a tracking capability but also allows correcting the point cloud distortion, yielding a more accurate description of the moving object.
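Once a full velocity estimate is available, the correction step itself is simple: each point is shifted to where the object would be at a common reference time. A minimal sketch, assuming per-point timestamps and a constant object velocity over the scan (the function name and constant-velocity assumption are illustrative, not the paper's exact model):

```python
import numpy as np

def undistort(points, timestamps, velocity, t_ref):
    # points:     (N, 3) raw lidar points on a moving object
    # timestamps: (N,)   capture time of each point within the scan
    # velocity:   (3,)   estimated full velocity of the object
    # Shift each point to its position at the reference time t_ref,
    # assuming constant velocity over the scan.
    dt = (t_ref - np.asarray(timestamps))[:, None]
    return np.asarray(points) + dt * np.asarray(velocity)[None, :]
```

A point captured 0.1 s before `t_ref` on an object moving at 1 m/s along x is shifted forward by 0.1 m, de-smearing the cloud.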
SimpleTrack:
Understanding and Rethinking
3D Multi-object Tracking
3D multi-object tracking (MOT) has witnessed numerous novel benchmarks and approaches in recent years, especially those under the "tracking-by-detection" paradigm.
Despite their progress and usefulness, an in-depth analysis of their strengths and weaknesses is not yet available.
In this paper, they summarize current 3D MOT methods into a unified framework by decomposing them into four constituent parts: pre-processing of detection, association, motion model, and life cycle management.
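The four constituent parts map onto a tracking-by-detection loop quite directly. A minimal sketch (greedy nearest-neighbour association and a constant-velocity motion model are simplifying assumptions for illustration; SimpleTrack's actual components differ):

```python
import numpy as np

class Track:
    def __init__(self, pos, tid):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.zeros_like(self.pos)
        self.id, self.missed = tid, 0

def mot_step(tracks, detections, next_id, score_thresh=0.3, gate=2.0, max_missed=3):
    # 1) Pre-processing of detection: drop low-confidence boxes.
    dets = [np.asarray(d["pos"], float) for d in detections if d["score"] >= score_thresh]
    # 2) Motion model: constant-velocity prediction of each track.
    for t in tracks:
        t.pred = t.pos + t.vel
    # 3) Association: greedy nearest-neighbour matching within a distance gate.
    unmatched = set(range(len(dets)))
    for t in tracks:
        cand = [j for j in unmatched if np.linalg.norm(dets[j] - t.pred) < gate]
        if cand:
            j = min(cand, key=lambda j: np.linalg.norm(dets[j] - t.pred))
            unmatched.discard(j)
            t.vel, t.pos, t.missed = dets[j] - t.pos, dets[j], 0
        else:
            t.pos, t.missed = t.pred, t.missed + 1
    # 4) Life cycle management: kill stale tracks, spawn new ones.
    tracks = [t for t in tracks if t.missed <= max_missed]
    for j in sorted(unmatched):
        tracks.append(Track(dets[j], next_id))
        next_id += 1
    return tracks, next_id
```

Swapping any single stage (e.g. the association metric or the motion model) leaves the rest of the loop untouched, which is what makes this decomposition useful for analysis.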