Acta Optica Sinica, Volume. 44, Issue 24, 2428005(2024)

Hierarchical Motion Estimation of Spatially Destabilized Targets under Gaussian Mixture Models

Zhiqiang Zhou, Riming Sun*, Chenglong Guo, and Yilong Zhu
Author Affiliations
  • School of Science, Dalian Jiaotong University, Dalian 116028, Liaoning, China

    Objective

    High-precision attitude measurement and motion estimation of non-cooperative targets in space are critical for various on-orbit service missions, including tracking, docking, rendezvous, and debris removal. Compared with other non-contact methods, line-array LiDAR offers advantages such as high imaging resolution and a large field of view, making it an ideal tool for precise space target measurement. However, because a line-array system captures only one line of information per scan, dynamic imaging of moving targets produces intra-frame motion discrepancies caused by the relative motion between the target and the measurement system. Furthermore, environmental factors such as lighting introduce noise, degrading the quality of point cloud data and complicating high-precision motion estimation for spatially non-cooperative targets. To address these challenges, we propose a hierarchical motion estimation method for spatially destabilized targets based on the expectation-maximization Gaussian mixture model (EM-GMM). This method is high-precision, stable, and robust, and it effectively overcomes the degradation of motion estimation accuracy caused by intra-frame motion discrepancies and measurement noise under a linear measurement system.
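    The intra-frame discrepancy described above can be illustrated with a minimal simulation, assuming a hypothetical scan geometry in which each column of the frame images one x-slice of the scene at a successively later time; the target shape (a ring of points), slice layout, and rates are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical target: 64 points on a unit ring in the x-y plane.
angles = (np.arange(64) + 0.5) * 2.0 * np.pi / 64
target = np.stack([np.cos(angles), np.sin(angles), np.zeros(64)], axis=1)

def line_scan(target, omega, n_cols, dt):
    """Assemble one 'frame' column by column: column j is imaged at time
    j*dt while the target keeps rotating at angular rate omega, so the
    assembled frame mixes different target poses (intra-frame motion
    discrepancy). A static target (omega = 0) yields a clean frame."""
    columns = []
    for j in range(n_cols):
        pts = target @ rot_z(omega * j * dt).T   # target pose at this column's time
        lo = -1.0 + 2.0 * j / n_cols             # assumed x-slice scan geometry
        hi = -1.0 + 2.0 * (j + 1) / n_cols
        sel = (pts[:, 0] >= lo) & (pts[:, 0] < hi)
        columns.append(pts[sel])
    return np.vstack(columns)
```

    With omega = 0 the frame reproduces the target exactly; with a nonzero angular rate, later columns record rotated poses, and points may be imaged twice or missed entirely, which is the distortion the proposed method must correct.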

    Methods

    In this paper, we apply the EM-GMM framework to estimate the motion of spatially destabilized targets using point cloud data collected by a linear measurement system. A Gaussian mixture model (GMM) is introduced, establishing two layers of the expectation-maximization (EM) algorithm. In the first layer, the GMM centroids are aligned to approximate the noiseless points, which are treated as hidden variables. The time continuity of the point cloud sequence is leveraged to correct the intra-frame motion discrepancies using a column-wise benchmark mapping method, which aligns the point cloud data across frames. By continuously refining the motion parameters, the first EM layer provides a coarse estimation. The second EM layer refines this by constructing noise reduction weights from a combination of the hyperbolic tangent function and posterior probabilities, creating virtual points that replace the noisy original measurements and thus enhancing robustness against noise.
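    The second-layer idea, posterior-weighted virtual points with a tanh noise-reduction weight, can be sketched as follows. This is a minimal illustration assuming isotropic GMM components with a shared standard deviation and a hypothetical weight w = tanh(k · max posterior); the paper's exact combination of the hyperbolic tangent and the posteriors is not given in this abstract.

```python
import numpy as np

def gmm_posteriors(points, means, sigma):
    """Posterior responsibility of each GMM component for each point,
    assuming isotropic components with common standard deviation sigma."""
    d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)  # (N, M)
    logp = -d2 / (2.0 * sigma ** 2)
    logp -= logp.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def virtual_points(points, means, sigma, k=5.0):
    """Replace each noisy measurement with a 'virtual point': a blend of
    the measurement and its posterior-weighted GMM prediction. The tanh
    weight pulls high-confidence points toward the model; the specific
    weighting here is an assumption for illustration."""
    P = gmm_posteriors(points, means, sigma)     # (N, M) responsibilities
    predicted = P @ means                        # soft-assignment prediction
    w = np.tanh(k * P.max(axis=1))[:, None]      # confidence weight in [0, 1)
    return w * predicted + (1.0 - w) * points
```

    Points whose posterior concentrates on a single component are pulled strongly toward the model prediction, while ambiguous points retain more of their measured position, which is one way such weights can temper the influence of noise on the subsequent M-step.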

    Results and Discussions

    Experiments are conducted on spatially destabilized targets under varying motion states and noise conditions, using the line-array LiDAR parameters in Table 1. The proposed method achieves high-precision motion estimation when initialized with 15 frames of input point cloud data (Fig. 2). The first EM layer successfully prevents the algorithm from converging to local optima. The noise reduction weights applied in the second EM layer significantly improve estimation accuracy (Table 3), with the average error reduced by 52.35% and 35.68%, and the standard deviation reduced by 57.71% and 54.54%, across 252 motion states relative to the first and second layers (Table 2). Finally, the performance is compared with three existing algorithms under various motion states and noise intensities. The experimental results demonstrate that the proposed algorithm effectively overcomes intra-frame motion discrepancies compared with other methods, and the estimation accuracy remains stable across different angular velocities (Fig. 5). The average errors are reduced by 71.64%, 66.95%, and 53.61% at noise intensities of 0.5% to 1.5%, yielding more accurate and more noise-robust motion estimation (Fig. 6), with the algorithm maintaining higher accuracy even in cases of greater noise overlap (Fig. 7).

    Conclusions

    In this paper, we address the challenges posed by intra-frame motion discrepancies and noise in motion estimation for spatially destabilized targets under a linear measurement system by framing motion estimation as a probability density problem. We introduce a Gaussian mixture model and establish a hierarchical motion estimation method that incorporates column-wise benchmark mapping for spatially destabilized targets. In addition, we employ virtual points in place of the original measurement points to mitigate the effect of noise on motion estimation. Experimental results demonstrate that the proposed method outperforms traditional approaches in handling complex scenarios with intra-frame motion discrepancies and noise interference, delivering more accurate estimation results even under pronounced target movement and noisy point cloud sequences.


    Zhiqiang Zhou, Riming Sun, Chenglong Guo, Yilong Zhu. Hierarchical Motion Estimation of Spatially Destabilized Targets under Gaussian Mixture Models[J]. Acta Optica Sinica, 2024, 44(24): 2428005

    Paper Information

    Category: Remote Sensing and Sensors

    Received: Mar. 14, 2024

    Accepted: May 13, 2024

    Published Online: Dec. 13, 2024

    The Author Email: Sun Riming (sunriming78@126.com)

    DOI: 10.3788/AOS240731
