Laser & Optoelectronics Progress, Volume 62, Issue 4, 0412006 (2025)
Welding-Stud Detection Method Based on Depth Perception and Multi-Scale Feature Fusion
In automotive production, stud detection suffers from frequent missed and false detections because welding studs and the surrounding stamped parts are similar in color. This study proposes a welding-stud detection method that combines depth perception with multi-scale feature fusion. First, the efficient multi-scale attention (EMA) module is integrated into the FasterBlock to construct the FasterEMA module, which enhances spatial feature extraction using PConv and EMA. Second, a cascaded group attention mechanism is embedded in the Transformer encoder layer to strengthen the network's perception of fine-grained features in deep layers and suppress interference from complex backgrounds, thereby improving the accuracy of stud localization. Furthermore, the SN-CCFM module is constructed using GSConv from SlimNeck and VoV-GSCSP, replacing the standard convolution and RepBlock modules in the CCFM module; this strengthens the interaction between shallow and deep features, achieves multi-scale feature fusion, and improves detection accuracy. Finally, the detection network is combined with depth information from an RGB-D camera to determine the actual position of the studs. The experimental results demonstrate a recall of 86.6%, a mean average precision of 88.2%, and a detection speed of 45.3 frames/s, meeting industrial production requirements.
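The abstract's final step, recovering the actual stud position from a 2D detection and RGB-D depth, follows the standard pinhole back-projection model. The sketch below illustrates that step only; the function name, bounding-box format, and intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stud_position_from_depth(bbox, depth_map, fx, fy, cx, cy):
    """Back-project the center of a detected stud bounding box to 3D camera
    coordinates using the pinhole model and a depth map aligned to the RGB image.

    bbox      : (x1, y1, x2, y2) pixel coordinates from the detector
    depth_map : HxW array of depth values in metres
    fx, fy    : focal lengths in pixels (camera intrinsics, assumed known)
    cx, cy    : principal point in pixels (camera intrinsics, assumed known)
    """
    x1, y1, x2, y2 = bbox
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # box centre in pixels

    # Use the median valid depth inside the box to suppress holes and outliers
    patch = depth_map[int(y1):int(y2), int(x1):int(x2)]
    z = float(np.median(patch[patch > 0]))

    # Pinhole back-projection: pixel (u, v) at depth z -> camera-frame (X, Y, Z)
    X = (u - cx) * z / fx
    Y = (v - cy) * z / fy
    return np.array([X, Y, z])
```

In practice the camera-frame coordinates would then be transformed into the robot or fixture frame via an extrinsic calibration, which is outside the scope of this sketch.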
Kaiqi Huang, Chenkang Jin. Welding-Stud Detection Method Based on Depth Perception and Multi-Scale Feature Fusion[J]. Laser & Optoelectronics Progress, 2025, 62(4): 0412006
Category: Instrumentation, Measurement and Metrology
Received: Jun. 4, 2024
Accepted: Jul. 5, 2024
Published Online: Feb. 18, 2025
CSTR:32186.14.LOP241420