Laser & Optoelectronics Progress, Volume 62, Issue 12, 1215011 (2025)
Self-Supervised Learning-Based Camera-Radar Odometry Fusion Localization
A camera-radar odometry fusion localization algorithm based on self-supervised learning is proposed to address the limited positioning accuracy of radar and the degraded performance of visual odometry in rainy weather. The algorithm leverages the rich information in camera data while benefiting from the resilience of radar to adverse weather conditions. First, a dilated convolutional network is incorporated into the deep learning-based visual odometry to enhance local image features, and a self-attention mechanism then extracts global contextual information from the image to improve depth prediction. Second, for the radar odometry, view reconstruction is used as a self-supervised signal to train the network to remove artifacts, preserving the fidelity of the radar data. To handle the differing sampling frequencies and misaligned timestamps of the camera and radar streams, soft temporal alignment matches the timestamps of adjacent radar point-cloud maps to the closest camera image pair. Finally, to align the visual and radar odometry precisely, a Smooth L1 loss function is designed to constrain their pose estimates, ensuring effective fusion of the two modalities. Experiments on the Oxford Radar RobotCar dataset demonstrate that, compared with the baseline algorithm, the proposed algorithm considerably improves pose estimation accuracy in both daytime and rainy conditions, validating its effectiveness.
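The abstract describes a depth encoder that pairs dilated convolutions (local features at an enlarged receptive field) with self-attention (global context). Below is a minimal PyTorch sketch of such a block; the channel width, dilation rates, and head count are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    """Illustrative encoder block: dilated convs for local features,
    multi-head self-attention for global context (hyperparameters assumed)."""
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        # Parallel dilated convolutions enlarge the receptive field
        # without reducing spatial resolution (padding = dilation
        # keeps the feature-map size for a 3x3 kernel).
        self.dilated = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local branch: concatenate multi-dilation responses and fuse.
        local = self.fuse(torch.cat([conv(x) for conv in self.dilated], dim=1))
        # Global branch: flatten the feature map into a token sequence
        # so every location can attend to every other location.
        b, c, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)              # residual + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```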
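The soft temporal alignment step pairs adjacent radar scans with the camera frames whose timestamps are closest. A minimal NumPy sketch follows, assuming both streams expose sorted timestamps; the dataset's actual timestamp format and any matching tolerance are not specified in the abstract.

```python
import numpy as np

def soft_align(radar_ts: np.ndarray, camera_ts: np.ndarray) -> np.ndarray:
    """For each pair of adjacent radar scans, return indices of the
    camera frames whose timestamps are nearest to the two scan times.
    Assumes both timestamp arrays are sorted ascending (assumption)."""
    def nearest(ts: np.ndarray) -> np.ndarray:
        # searchsorted gives the insertion point; compare the two
        # neighbouring camera timestamps and keep the closer one.
        idx = np.searchsorted(camera_ts, ts)
        idx = np.clip(idx, 1, len(camera_ts) - 1)
        left, right = camera_ts[idx - 1], camera_ts[idx]
        return np.where(ts - left <= right - ts, idx - 1, idx)

    # Adjacent radar scans (t_k, t_{k+1}) -> closest camera image pair.
    first = nearest(radar_ts[:-1])
    second = nearest(radar_ts[1:])
    return np.stack([first, second], axis=1)   # shape (N-1, 2)
```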
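The fusion constraint penalizes disagreement between the visual and radar pose estimates with a Smooth L1 loss, as the abstract states. The sketch below assumes each odometry branch outputs a 6-DoF relative pose (3 translation + 3 rotation parameters), a common but here unconfirmed parameterization; the `beta` transition point is PyTorch's default, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def pose_consistency_loss(pose_vis: torch.Tensor,
                          pose_rad: torch.Tensor,
                          beta: float = 1.0) -> torch.Tensor:
    """Smooth L1 penalty between visual and radar pose estimates.
    Both tensors: (B, 6) relative poses, e.g. [tx, ty, tz, rx, ry, rz]
    (the 6-DoF layout is an assumption). Smooth L1 is quadratic for
    small residuals and linear for large ones, so occasional gross
    disagreements (e.g. residual radar artifacts) do not dominate
    the gradient."""
    return F.smooth_l1_loss(pose_vis, pose_rad, beta=beta)
```

In training, a term like this would typically be added, with a weighting factor, to each branch's self-supervised reconstruction losses so that the two pose streams are pulled toward agreement.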
Hanwen Zhang, Yanyang Wang. Self-Supervised Learning-Based Camera-Radar Odometry Fusion Localization[J]. Laser & Optoelectronics Progress, 2025, 62(12): 1215011
Category: Machine Vision
Received: Sep. 3, 2024
Accepted: Jan. 7, 2025
Published Online: Jun. 25, 2025
The Author Email: Yanyang Wang (yywang@mail.xhu.edu.cn)
CSTR:32186.14.LOP241942