SemAlign: Annotation-Free Camera-LiDAR Calibration with Semantic Alignment Loss

Zhijian Liu*, Haotian Tang*, Sibo Zhu*, Song Han
Massachusetts Institute of Technology (MIT)
(* indicates equal contributions)

Abstract

Multi-sensor solutions have been widely adopted in real-world robotics systems (e.g., self-driving vehicles) for their robustness. However, their performance depends heavily on accurate calibration between the different sensors, which is very time-consuming (i.e., hours of human effort) to acquire. Recent learning-based solutions partially address this cost but still require expensive ground-truth annotations as supervision. In this paper, we introduce a novel self-supervised semantic alignment loss that quantitatively measures the quality of a given calibration. It correlates well with conventional evaluation metrics yet requires no ground-truth calibration annotations as a reference. Based on this loss, we further propose an annotation-free, optimization-based calibration algorithm (SemAlign) that first estimates a coarse calibration via loss-guided initialization and then refines it with gradient-based optimization. SemAlign reduces calibration time from hours of human effort to only seconds of GPU computation. It not only achieves performance comparable to existing supervised learning frameworks but also generalizes much better when transferred to a different dataset.
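The two-stage procedure described above (loss-guided initialization followed by gradient-based refinement) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `semantic_alignment_loss` is a toy quadratic placeholder standing in for the actual loss (which projects LiDAR points into the image and compares semantic labels), and the 6-parameter extrinsics vector, candidate count, and finite-difference refinement are all assumptions made for the sketch.

```python
import numpy as np

def semantic_alignment_loss(extrinsics):
    """Placeholder for the paper's semantic alignment loss.

    The real loss projects LiDAR points into the camera frame and
    measures the mismatch between LiDAR and image semantics. Here we
    use a toy quadratic stand-in so the sketch runs end to end.
    """
    target = np.array([0.1, -0.2, 0.05, 1.0, 0.5, -0.3])  # toy optimum
    return float(np.sum((extrinsics - target) ** 2))

def calibrate(num_candidates=256, steps=100, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)

    # Stage 1: loss-guided initialization -- sample random candidate
    # extrinsics (3 rotation + 3 translation parameters) and keep the
    # candidate with the lowest semantic alignment loss.
    candidates = rng.uniform(-2.0, 2.0, size=(num_candidates, 6))
    losses = [semantic_alignment_loss(c) for c in candidates]
    x = candidates[int(np.argmin(losses))].copy()

    # Stage 2: gradient-based refinement. We use central finite
    # differences here for simplicity; an autodiff framework would
    # compute the same gradient analytically.
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros(6)
        for i in range(6):
            dx = np.zeros(6)
            dx[i] = eps
            grad[i] = (semantic_alignment_loss(x + dx)
                       - semantic_alignment_loss(x - dx)) / (2 * eps)
        x -= lr * grad
    return x, semantic_alignment_loss(x)

extrinsics, final_loss = calibrate()
```

The coarse search keeps the refinement from falling into a poor local minimum, while the gradient steps recover the precision that random sampling alone cannot reach.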

Citation

@inproceedings{liu2021semalign,
  title={SemAlign: Annotation-Free Camera-LiDAR Calibration with Semantic Alignment Loss},
  author={Liu, Zhijian and Tang, Haotian and Zhu, Sibo and Han, Song},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2021}
}

Acknowledgments: We thank Ji Lin for helpful discussions. This research is supported by NVIDIA, Samsung, Hyundai Motors, and NSF.