Laser & Optoelectronics Progress, Volume 60, Issue 6, 0615006 (2023)
FastCrack: Real-Time Pavement Crack Segmentation
Highway use produces various surface cracks that can damage the road structure. Efficient and accurate crack segmentation algorithms have therefore attracted significant research interest in transportation. Among existing image-based crack segmentation methods, data-driven deep learning approaches show the best applicability. However, neural-network-based crack segmentation models generally pay little attention to real-time performance. Therefore, this study designs a structural hyperparameter selection framework and proposes a real-time pavement crack segmentation model (FastCrack-SPOS) that balances accuracy and speed while selecting appropriate structural hyperparameters. First, we constructed 45 models with different structures by combining various widths (16, 32, 48, 64, 80), depths (D1, D2, D3), and down-sampling ratios (1/4, 1/8, 1/32), and analyzed the effect of each hyperparameter on model performance. Then, we used neural architecture search to find suitable convolution blocks for each layer and constructed the final model. Experimental results reveal that the proposed structural hyperparameter selection method is highly effective for lightweight crack segmentation model design. FastCrack-SPOS achieves an intersection over union of 62.88% on the pavement crack dataset with only 0.29×10⁶ parameters, a 95% reduction compared with existing models. On 1024×1024 images, FastCrack-SPOS runs at 147 frame/s, achieving a balance between speed and accuracy and demonstrating high practical value.
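The 45 structural variants follow directly from the stated grid: 5 widths × 3 depths × 3 down-sampling ratios. A minimal sketch of enumerating that grid (variable names are illustrative and not from the paper's code):

```python
from itertools import product

# Structural hyperparameter grid from the abstract:
# 5 widths x 3 depths x 3 down-sampling ratios = 45 candidate models.
widths = [16, 32, 48, 64, 80]        # channel widths
depths = ["D1", "D2", "D3"]          # depth settings
ratios = ["1/4", "1/8", "1/32"]      # down-sampling (output-stride) ratios

# Cartesian product enumerates every candidate structure exactly once.
candidates = list(product(widths, depths, ratios))
print(len(candidates))  # 45
```

Each candidate would then be trained and evaluated to analyze how width, depth, and down-sampling ratio individually affect accuracy and speed.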
Zhuang Yue, Xiaodong Chen, Yi Wang, Huaiyu Cai, Weixi Yan, Liying Hou. FastCrack: Real-Time Pavement Crack Segmentation[J]. Laser & Optoelectronics Progress, 2023, 60(6): 0615006
Category: Machine Vision
Received: Feb. 17, 2022
Accepted: Mar. 30, 2022
Published Online: Mar. 7, 2023
The Author Email: Chen Xiaodong (xdchen@tju.edu.cn)