Laser & Optoelectronics Progress, Volume 61, Issue 24, 2415008 (2024)
Three-Dimensional Reconstruction Methods for Obstacles in Complex Parking Scenarios
Detecting irregular obstacles in complex intelligent-parking scenarios is difficult. To address this, a method is proposed that projects gridded structured light onto the detection area and captures the deformation of the light grid on obstacle surfaces, improving the precision of obstacle feature acquisition. A method for generating depth maps by training an end-to-end network is then introduced. External contour features from red-green-blue (RGB) images are fused with three-dimensional (3D) depth features from depth images, and a dual-feature parallel processing algorithm for RGB and depth imagery is proposed. A multi-scale feature fusion extraction model is designed that enables multifaceted feature extraction and deep fusion without increasing model complexity, allowing mesh models to evolve toward accurate 3D representations. On this basis, an end-to-end 3D reconstruction model based on a graph convolutional neural network and informed by multi-scale features is established. Experiments in intelligent parking scenarios show that, compared with the baseline 3D reconstruction model, the proposed model reduces the mean chamfer distance by 2% and the mean earth mover's distance by 9%. Relative to three mainstream 3D reconstruction models, the mean chamfer distance is reduced by 60%, 2%, and 78%, and the mean earth mover's distance by 16%, 23%, and 91%, respectively.
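To make the dual-feature parallel processing idea concrete, the sketch below shows one plausible way to run parallel RGB and depth branches and fuse their feature maps at several scales. This is a minimal illustration under assumptions, not the paper's architecture: the class names (Branch, DualBranchFusion), channel widths, and the 1×1-convolution fusion step are all hypothetical choices made for the example.

```python
# Illustrative sketch (assumed design, not the paper's model): two parallel
# CNN branches extract RGB and depth features at several scales; each scale
# is fused by concatenation followed by a 1x1 convolution, so the fusion
# adds little extra model complexity.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """A small strided-convolution branch that returns one feature map per scale."""
    def __init__(self, in_ch, channels):
        super().__init__()
        blocks, c = [], in_ch
        for out_ch in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(c, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True)))
            c = out_ch
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # keep the intermediate map at each scale
        return feats

class DualBranchFusion(nn.Module):
    """Fuse per-scale RGB and depth features by concat + 1x1 convolution."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.rgb = Branch(3, channels)       # RGB image branch
        self.depth = Branch(1, channels)     # depth map branch
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * c, c, 1) for c in channels])

    def forward(self, rgb, depth):
        fused = []
        for f, r, d in zip(self.fuse, self.rgb(rgb), self.depth(depth)):
            fused.append(f(torch.cat([r, d], dim=1)))
        return fused                 # multi-scale fused feature maps

# Usage: a 128x128 input yields fused maps at 64x64, 32x32, and 16x16.
model = DualBranchFusion()
rgb = torch.randn(1, 3, 128, 128)
depth = torch.randn(1, 1, 128, 128)
for f in model(rgb, depth):
    print(f.shape)
```

In a full pipeline, such multi-scale fused maps would condition the graph-convolutional mesh deformation stage; that stage is omitted here.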
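The two reported metrics are standard point-cloud distances, sketched below for reference. This is an illustrative implementation, not the authors' evaluation code: the exact sampling, normalization, and averaging conventions in the paper are unknown, and the earth mover's distance is computed here via an optimal one-to-one assignment, a common choice for equal-size point clouds.

```python
# Illustrative sketch: chamfer distance and an assignment-based earth
# mover's distance between two point clouds, as commonly used to score
# 3D reconstruction quality.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric chamfer distance between point sets p (N,3) and q (M,3)."""
    d = cdist(p, q)                            # pairwise Euclidean distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def earth_movers_distance(p: np.ndarray, q: np.ndarray) -> float:
    """EMD for equal-size point sets via a minimum-cost bijection."""
    assert len(p) == len(q), "this EMD variant assumes equally sized clouds"
    d = cdist(p, q)
    rows, cols = linear_sum_assignment(d)      # optimal one-to-one matching
    return d[rows, cols].mean()

# Usage: compare a noisy reconstruction against synthetic ground truth.
rng = np.random.default_rng(0)
gt = rng.random((256, 3))
pred = gt + rng.normal(scale=0.01, size=gt.shape)
print(f"CD : {chamfer_distance(pred, gt):.4f}")
print(f"EMD: {earth_movers_distance(pred, gt):.4f}")
```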
Shidian Ma, Yuxuan Huang, Haobin Jiang, Aoxue Li, Mu Han, Chenxu Li. Three-Dimensional Reconstruction Methods for Obstacles in Complex Parking Scenarios[J]. Laser & Optoelectronics Progress, 2024, 61(24): 2415008
Category: Machine Vision
Received: Apr. 3, 2024
Accepted: May 22, 2024
Published Online: Dec. 10, 2024
The Author Email: Shidian Ma (masd@ujs.edu.cn)
CSTR:32186.14.LOP241025