Probabilistic Camera-to-LiDAR Map Registration for Autonomous Vehicle Localization
Description
This project aims to enhance a deep-learning-based system that localizes a vehicle within a LiDAR-generated map using images captured by onboard cameras. The method builds upon the work introduced by Daniele Cattaneo in CMRNet: Camera to LiDAR-Map Registration and further expanded in subsequent research. The system relies on deep neural networks to establish correspondences between LiDAR-generated point cloud maps and camera images. However, the current approach operates as a black box, providing no explicit information about the reliability of these correspondences.
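To make the registration idea concrete, the sketch below follows the general CMRNet-style recipe: project the LiDAR map into the camera at a rough initial pose to obtain a synthetic depth image, then let a CNN regress a pose correction. This is a minimal PyTorch illustration under a pinhole-camera assumption, not the actual CMRNet implementation; the names `project_lidar_to_depth` and `PoseRegressionNet` and the toy architecture are placeholders for illustration only.

```python
import torch
import torch.nn as nn

def project_lidar_to_depth(points_cam, K, height, width):
    """Project LiDAR map points (already expressed in the camera frame, Nx3)
    into a sparse synthetic depth image using a pinhole camera model K (3x3).
    Overlapping points simply overwrite each other; nearest-point handling is
    omitted for brevity."""
    mask = points_cam[:, 2] > 0.1            # keep only points in front of the camera
    pts = points_cam[mask]
    uvw = (K @ pts.T).T                      # homogeneous pixel coordinates (N, 3)
    u = (uvw[:, 0] / uvw[:, 2]).long()
    v = (uvw[:, 1] / uvw[:, 2]).long()
    depth = pts[:, 2]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth_img = torch.zeros(height, width)
    depth_img[v[inside], u[inside]] = depth[inside]
    return depth_img

class PoseRegressionNet(nn.Module):
    """Toy stand-in for a CMRNet-like backbone: consumes the RGB image plus the
    synthetic LiDAR depth image and regresses a 6-DoF pose correction
    (3 translation components + 4 quaternion components)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 7)

    def forward(self, rgb, lidar_depth):
        x = torch.cat([rgb, lidar_depth.unsqueeze(1)], dim=1)  # (B, 4, H, W)
        return self.head(self.encoder(x))

# Example: one image plus a random local LiDAR map.
H, W = 96, 128
K = torch.tensor([[100.0, 0.0, W / 2], [0.0, 100.0, H / 2], [0.0, 0.0, 1.0]])
points = torch.rand(5000, 3) * torch.tensor([20.0, 10.0, 30.0]) - torch.tensor([10.0, 5.0, 0.0])
depth = project_lidar_to_depth(points, K, H, W)
rgb = torch.rand(1, 3, H, W)
correction = PoseRegressionNet()(rgb, depth.unsqueeze(0))
print(correction.shape)  # torch.Size([1, 7])
```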
Objective
This project proposes integrating uncertainty estimation into the localization process, leveraging probabilistic methods explored by Matteo Vaghi (Paper 1, Paper 2). The goal is to develop a probabilistic version of CMRNet, enhancing robustness and reliability in real-world autonomous driving scenarios.
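As one concrete way to attach uncertainty to a regressed pose, the sketch below uses Monte Carlo dropout: dropout is kept active at inference time and the spread of repeated predictions is treated as an uncertainty estimate. This is a generic illustration under that assumption, not necessarily the specific probabilistic method used in the referenced papers; `MCDropoutPoseHead` and `mc_dropout_pose` are hypothetical names.

```python
import torch
import torch.nn as nn

class MCDropoutPoseHead(nn.Module):
    """Pose-regression head with dropout kept stochastic at test time so that
    repeated forward passes sample from an approximate predictive distribution."""
    def __init__(self, feat_dim=32, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 7),  # 3 translation + 4 quaternion components
        )

    def forward(self, features):
        return self.net(features)

@torch.no_grad()
def mc_dropout_pose(head, features, n_samples=30):
    """Run several stochastic forward passes and return the per-dimension
    mean and standard deviation of the predicted pose correction."""
    head.train()  # keep the dropout layers active
    samples = torch.stack([head(features) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

features = torch.rand(1, 32)            # stand-in for CNN backbone features
head = MCDropoutPoseHead()
mean_pose, std_pose = mc_dropout_pose(head, features)
print(mean_pose.shape, std_pose.shape)  # torch.Size([1, 7]) torch.Size([1, 7])
```

A large per-dimension standard deviation flags localizations that a downstream filter or planner may want to down-weight or reject, which is the kind of reliability signal the black-box approach currently lacks.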
Research Scope & Methodology
- Deep Learning-based Localization using Camera-to-LiDAR Map Registration.
- Uncertainty Quantification in neural network-based perception models.
- Evaluation of probabilistic approaches within the existing CMRNet framework (see the sketch after this list).
- Collaboration with research teams at Universidad de Alcalá (UAH) 🇪🇸 and Università di Milano-Bicocca (UNIMIB) 🇮🇹.
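As a minimal illustration of how a probabilistic localizer could be evaluated (third bullet above), the sketch below checks whether the predicted uncertainty actually tracks the localization error via the Pearson correlation on synthetic data. The metric choice and the helper name `uncertainty_error_correlation` are assumptions for illustration, not the evaluation protocol of the referenced papers.

```python
import torch

def uncertainty_error_correlation(pred_std, abs_error):
    """Sanity check for a probabilistic localizer: Pearson correlation between
    predicted uncertainty and actual absolute error. A well-calibrated model
    should show a clearly positive correlation."""
    x = pred_std - pred_std.mean()
    y = abs_error - abs_error.mean()
    return (x * y).sum() / (x.norm() * y.norm() + 1e-12)

# Toy data: synthetic predicted sigmas and errors loosely tied to them.
sigma = torch.rand(500) * 0.5            # predicted std in meters
error = sigma * torch.rand(500) * 2.0    # actual translation error in meters
print(float(uncertainty_error_correlation(sigma, error)))
```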
Suitability & Collaboration
This project is ideal for Master's Thesis (TFM) candidates, offering hands-on experience in deep learning, sensor fusion, and uncertainty estimation. The research will be conducted in collaboration with the authors of the referenced papers and both UAH and UNIMIB, providing an opportunity to contribute to an international research initiative.