
Tightly-Coupled LiDAR–Inertial Odometry with Geometric-Uncertainty Modeling
In this article, we propose a novel LiDAR-inertial-visual fusion framework, named Voxel-LIVO, which performs real-time dense map reconstruction while achieving accurate and robust state estimation. The framework tightly fuses measurements from three heterogeneous sensors via an iterated error-state Kalman filter (IESKF) and, through a unified hybrid-map strategy, maintains short-term, mid-term, and long-term data association. The system delivers high-precision localization, remains robust to LiDAR and/or visual degradation, and keeps its memory footprint low. Its accuracy gains stem from the extraction of high-quality image patches, together with LiDAR-plane-guided affine warping of those patches, which markedly improves image-alignment precision; the state is further refined by sequential LiDAR-visual bundle adjustment (BA). Its robustness gains stem from the use of direct methods in both the LiDAR and visual subsystems, which capture subtle changes in geometric and visual features. The system combines multiple LiDAR and camera frames within a sliding window to strengthen data association, and projects the local point-cloud map onto the image to mitigate LiDAR blind spots. Voxel-LIVO is evaluated on a wide range of public datasets and on our private datasets in terms of localization accuracy, robustness, and point-cloud map precision. The results show that Voxel-LIVO achieves the highest accuracy among all compared state-of-the-art SLAM systems. Furthermore, Voxel-LIVO demonstrates excellent robustness in highly challenging scenarios, particularly when LiDAR and/or camera measurements are degraded.
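The plane-guided affine warp mentioned above can be sketched as follows. This is an illustrative reconstruction under standard assumptions (a pinhole camera with intrinsics K, and a LiDAR-estimated plane n·X = d inducing the homography H = K(R − t·nᵀ/d)K⁻¹ between views), not the paper's exact formulation; the function names and parameter values are placeholders.

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping reference-view pixels to current-view pixels for
    3-D points lying on the plane n^T X = d (plane expressed in the
    reference camera frame). Standard result: H = K (R - t n^T / d) K^-1."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def affine_approx(H, u):
    """Local affine approximation of the projective warp H at reference
    pixel u = (x, y), as commonly used to warp small patches in direct
    image alignment: returns the 2x2 linear part A and the warped center c."""
    def warp(p):
        q = H @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]
    c = warp(u)
    eps = 1.0  # one-pixel finite-difference step
    A = np.column_stack([
        (warp(u + np.array([eps, 0.0])) - c) / eps,
        (warp(u + np.array([0.0, eps])) - c) / eps,
    ])
    return A, c

# Hypothetical intrinsics for illustration.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Sanity case: identity motion (R = I, t = 0) must leave the patch unchanged,
# i.e. H = I, the affine part is the identity, and the center does not move.
H = plane_induced_homography(K, np.eye(3), np.zeros(3),
                             np.array([0.0, 0.0, 1.0]), 5.0)
A, c = affine_approx(H, np.array([100.0, 80.0]))
```

With a nonzero relative rotation and translation, A captures the foreshortening of the patch predicted by the LiDAR plane, which is what lets direct alignment compare patches consistently across viewpoints.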