Acta Optica Sinica, Volume 43, Issue 21, 2112003 (2023)

Visual-Inertial Adaptive Fusion Algorithm Based on Measurement Uncertainty

Xinxin Huang, Yongjie Ren*, Keyao Ma, and Zhiyuan Niu
Author Affiliations
  • State Key Laboratory of Precision Measurement Technology and Instrument, Tianjin University, Tianjin 300072, China

    Objective

    In recent years, with the development of computer vision, image processing, data fusion, and other technologies, visual measurement has been widely applied across modern industry. The inertial measurement unit (IMU) has a fast response speed, good dynamic performance, and high short-term accuracy, and can therefore effectively improve the robustness of visual positioning in complex industrial environments, typified by large-scale equipment manufacturing sites such as shipbuilding and aerospace. However, traditional filtering-based visual-inertial fusion algorithms keep the fusion weights of the visual and inertial information fixed, so when visual observation conditions are poor, the accuracy of visual-inertial positioning degrades greatly. Therefore, to solve the problems of low accuracy, poor adaptability, and low robustness caused by poor visual observation conditions in complex industrial environments, we propose a visual-inertial adaptive fusion algorithm based on measurement uncertainty. To cope with poor visual observation conditions, we dynamically adjust the data fusion weights of the visual and inertial sensors by analyzing the measurement uncertainty of visual positioning. This greatly improves measurement accuracy while enhancing measurement robustness.

    Methods

    To complete real-time assembly and positioning tasks for large-scale, complex equipment such as spacecraft and ship hulls, we use a wearable helmet as the carrier, combined with immersive measurement technology, and calibrate the visual-inertial system with a three-axis precision turntable. Loosely coupled filtering fuses the visual and inertial information to obtain real-time global pose estimates of the surveyor. We analyze the measurement uncertainty of visual positioning based on an implicit function model: the global control-point position error and the image-point extraction error are taken as inputs of the uncertainty propagation model, and the measurement uncertainty of visual positioning is obtained as the output. An error-state extended Kalman filter (ESKF) then performs visual-inertial fusion localization. The ESKF state update relies on the covariance matrix of the observation information, which directly affects the filter's accuracy. The camera provides this observation information, but the visual positioning results are often strongly affected by the measurement environment. When observation conditions are poor, the accuracy of visual positioning decreases, the observation confidence in the ESKF no longer matches the measurement uncertainty of visual positioning, and the ESKF cannot achieve optimal estimation. To adapt to different visual observation conditions, we establish an adaptive filtering fusion positioning model: the observation noise covariance matrix in the ESKF model is represented by the measurement uncertainty of visual positioning, so the fusion weights of the visual and inertial information are adjusted adaptively. When the measurement uncertainty of visual positioning is small, meaning that visual positioning is accurate, the Kalman gain is large, which increases the influence of the camera observation on the ESKF result; when the uncertainty is large, meaning that visual positioning is inaccurate, the Kalman gain is small, which reduces that influence.
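    As a minimal illustrative sketch (not the authors' implementation), the adaptive update can be summarized as follows: the visual pose covariance is first propagated from the control-point and image-point error covariances through the Jacobian of the implicit measurement function, and then substituted for the fixed observation noise covariance in the ESKF update. The function names, shapes, and the additive error injection below are simplifying assumptions.

    ```python
    import numpy as np

    def propagate_visual_uncertainty(J, cov_inputs):
        """First-order uncertainty propagation through the implicit function model.

        J          : Jacobian of the visual pose solution with respect to the
                     stacked inputs (control-point positions and image-point
                     coordinates), assumed shape (6, m).
        cov_inputs : (m, m) covariance of those inputs.
        Returns the 6x6 measurement uncertainty of the visual pose estimate.
        """
        return J @ cov_inputs @ J.T

    def adaptive_eskf_update(x, P, z, h, H, R_visual):
        """One ESKF measurement update with the adaptive observation covariance.

        x        : nominal state (a pose vector here, for illustration)
        P        : error-state covariance
        z, h     : visual pose observation and its prediction from x
        H        : observation Jacobian with respect to the error state
        R_visual : visual measurement uncertainty, substituted for a fixed
                   observation noise covariance
        """
        S = H @ P @ H.T + R_visual       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain: shrinks as R_visual grows
        dx = K @ (z - h)                 # error-state correction
        x = x + dx                       # additive injection (a full ESKF injects
                                         # attitude errors multiplicatively)
        P = (np.eye(P.shape[0]) - K @ H) @ P
        return x, P
    ```

    Because R_visual enters the innovation covariance, a large visual measurement uncertainty automatically shrinks the Kalman gain and shifts the fusion weight toward the inertial propagation, which is precisely the adaptive behavior described above.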

    Results and Discussions

    We use the T-Mac pose measurement system of a laser tracker and a precision three-axis turntable to experimentally verify the positioning accuracy of the proposed fusion positioning algorithm (Fig. 3). During system movement, the visual positioning uncertainty is solved according to the implicit function model (Fig. 4) and substituted for the observation covariance matrix of the ESKF model to obtain the results of the proposed method. In the actual measurement, the relative pose between the T-Mac and the helmet measurement system remains fixed, but because of measurement errors the measured relative pose is not. The standard deviation of the relative pose is therefore used as the dispersion measure to evaluate the pose measurement accuracy of the helmet measurement system. Compared with the results obtained by pure visual positioning and the traditional ESKF (Fig. 6), when the measurement uncertainty of visual positioning is small and the visual observation conditions are good (Table 2), the standard deviation of each axis angle is less than 0.04°, the standard deviation of each axis position is less than 2 mm, and all three methods obtain good positioning results. When the measurement uncertainty of visual positioning is large and the visual observation conditions are poor (Table 3), the results of pure visual positioning and the traditional ESKF show significant deviations, whereas with the proposed method the standard deviation of each axis angle is less than 0.2° and the standard deviation of each axis position is less than 7 mm. Compared with the traditional ESKF, the proposed method reduces the standard deviations of the Y-axis and Z-axis angles by 46.4% and 28.7%, respectively (the X-axis being the exception), and reduces the standard deviations of the three-axis positions by 66.4%, 60.4%, and 43.7%.
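    As an illustrative sketch of the evaluation metric only (the pose parameterization and the componentwise differencing are simplifying assumptions, not the paper's exact procedure), the per-axis standard deviation of the relative pose can be computed as:

    ```python
    import numpy as np

    def relative_pose_std(helmet_poses, tmac_poses):
        """Per-axis dispersion of the helmet-to-T-Mac relative pose.

        helmet_poses, tmac_poses : (N, 6) arrays of per-frame pose vectors
        (three angles in degrees, three positions in mm) in a common frame.
        Since the true relative pose is fixed, its standard deviation over
        the trajectory reflects the measurement accuracy of each axis.
        """
        # Simplified componentwise differencing; a rigorous version would
        # compose the SE(3) transforms frame by frame.
        rel = tmac_poses - helmet_poses
        return rel.std(axis=0)  # one standard deviation per pose axis
    ```

    Smaller per-axis standard deviations correspond to more accurate and more stable pose measurement.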

    Conclusions

    The industrial environment is complex, so it is difficult to ensure that visual observation conditions are always good. Pure visual positioning and the traditional ESKF require good visual observation conditions to obtain accurate pose estimates, whereas the visual-inertial adaptive fusion algorithm based on measurement uncertainty proposed in this paper provides better pose fusion results than both under poor visual observation conditions. By solving the measurement uncertainty of visual positioning, the proposed method adjusts the weight of the camera observation information in a timely manner, adapts better to different observation conditions, enhances the positioning robustness of the system, and improves the accuracy of filtering-based visual-inertial positioning, thus meeting the needs of visual-inertial positioning in complex industrial environments.

    Paper Information

    Category: Instrumentation, Measurement and Metrology

    Received: Apr. 20, 2023

    Accepted: Jun. 8, 2023

    Published Online: Nov. 8, 2023

    Author Email: Yongjie Ren (yongjieren@tju.edu.cn)

    DOI: 10.3788/AOS230851
