A benchmark for point clouds registration algorithms
Published in Robotics and Autonomous Systems, 2021
Recommended citation: Simone Fontana, Daniele Cattaneo, Augusto L. Ballardini, Matteo Vaghi, Domenico G. Sorrenti, "A benchmark for point clouds registration algorithms", Robotics and Autonomous Systems, vol. 140, 2021, p. 103734. https://www.sciencedirect.com/science/article/abs/pii/S0921889021000191
Point clouds registration is a fundamental step of many point clouds processing pipelines; however, most algorithms are tested on data that are collected ad-hoc and not shared with the research community. These data often cover only a very limited set of use cases; therefore, the results cannot be generalized. The public datasets proposed so far, taken individually, cover only a few kinds of environment and mostly a single sensor. For these reasons, we developed a benchmark for localization and mapping applications using multiple publicly available datasets. In this way, we are able to cover many kinds of environment and many kinds of sensor that can produce point clouds. Furthermore, the ground truth has been thoroughly inspected and evaluated to ensure its quality. For some of the datasets, the accuracy of the ground truth measuring system was not reported by the original authors; therefore, we estimated it with our own novel method, based on an iterative registration algorithm. Along with the data, we provide a broad set of registration problems, chosen to cover different types of initial misalignment, various degrees of overlap, and different kinds of registration problems. Lastly, we propose a metric to measure the performance of registration algorithms: it combines the commonly used rotation and translation errors into a single score, to allow an objective comparison of the alignments. This work aims to encourage authors to use a public, shared benchmark, instead of data collected ad-hoc, to ensure objectivity and repeatability, two fundamental characteristics of any scientific field.
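The abstract only states that the proposed metric combines the usual rotation and translation errors; the exact definition is given in the full paper. As a rough illustration of the general idea (not the paper's actual metric), the Python sketch below computes the geodesic rotation error and the Euclidean translation error between an estimated and a ground-truth rigid transform and sums them with an arbitrary weight; `rot_weight` is an assumption chosen purely for illustration.

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) of the relative rotation between estimate and ground truth."""
    R_rel = R_est @ R_gt.T
    # Clamp to the valid arccos domain to guard against numerical noise.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def translation_error(t_est, t_gt):
    """Euclidean distance (in the unit of the point clouds) between the translations."""
    return np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))

def combined_error(T_est, T_gt, rot_weight=0.05):
    """Single scalar score: translation error plus weighted rotation error.

    T_est, T_gt are 4x4 homogeneous transforms; rot_weight is an arbitrary
    trade-off constant used here for illustration only.
    """
    r_err = rotation_error_deg(T_est[:3, :3], T_gt[:3, :3])
    t_err = translation_error(T_est[:3, 3], T_gt[:3, 3])
    return t_err + rot_weight * r_err

if __name__ == "__main__":
    T_gt = np.eye(4)
    T_est = np.eye(4)
    T_est[:3, 3] = [0.1, 0.0, 0.0]  # 10 cm translation offset, no rotation error
    print(combined_error(T_est, T_gt))  # -> 0.1
```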
Preprint version available here
BibTeX:
@article{FONTANA2021103734,
  title    = {A benchmark for point clouds registration algorithms},
  journal  = {Robotics and Autonomous Systems},
  volume   = {140},
  pages    = {103734},
  year     = {2021},
  issn     = {0921-8890},
  doi      = {https://doi.org/10.1016/j.robot.2021.103734},
  url      = {https://www.sciencedirect.com/science/article/pii/S0921889021000191},
  author   = {Simone Fontana and Daniele Cattaneo and Augusto L. Ballardini and Matteo Vaghi and Domenico G. Sorrenti},
  keywords = {Benchmark, Point clouds registration, Datasets},
  abstract = {Point clouds registration is a fundamental step of many point clouds processing pipelines; however, most algorithms are tested on data that are collected ad-hoc and not shared with the research community. These data often cover only a very limited set of use cases; therefore, the results cannot be generalized. Public datasets proposed until now, taken individually, cover only a few kinds of environment and mostly a single sensor. For these reasons, we developed a benchmark, for localization and mapping applications, using multiple publicly available datasets. In this way, we are able to cover many kinds of environment and many kinds of sensor that can produce point clouds. Furthermore, the ground truth has been thoroughly inspected and evaluated to ensure its quality. For some of the datasets, the accuracy of the ground truth measuring system was not reported by the original authors, therefore we estimated it with our own novel method, based on an iterative registration algorithm. Along with the data, we provide a broad set of registration problems, chosen to cover different types of initial misalignment, various degrees of overlap, and different kinds of registration problems. Lastly, we propose a metric to measure the performances of registration algorithms: it combines the commonly used rotation and translation errors together, to allow an objective comparison of the alignments. This work aims at encouraging authors to use a public and shared benchmark, instead of data collected ad-hoc, to ensure objectivity and repeatability, two fundamental characteristics in any scientific field.}
}