Optics and Precision Engineering, Volume 32, Issue 18, 2792 (2024)
Occluded target grasping detection method based on spatial information aggregation
To address the low accuracy of grasp-position detection for occluded targets in vision-based robotic grasping, we propose an occluded-target grasp-position detection method based on spatial information aggregation. Occlusion changes a target's intrinsic features in the camera's field of view, degrading both its positional information and its shape and structural features. First, coordinate convolution replaces traditional convolution for feature extraction: coordinate channels appended to the input feature map improve the network's ability to perceive position information. Second, a spatial information aggregation module is designed; it adopts a parallel structure to enlarge the local receptive field and encodes the channels along each spatial direction to obtain multi-scale spatial information, which is then aggregated through nonlinear fitting so that the model better understands the target's structure and shape. Finally, the grasp-detection network outputs the grasp quality, angle, and width, from which the optimal grasp position is computed and the optimal grasp rectangle is established. Validated on the Cornell Grasping dataset, a self-constructed occlusion dataset, and the Jacquard dataset, the method achieves detection accuracies of 98.9%, 94.7%, and 96.0%, respectively, and a 93% success rate over 100 real grasping experiments on the experimental platform. The proposed method attains the highest detection accuracy on all three datasets and detects grasps more reliably in real scenes.
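To make the coordinate-convolution step concrete, below is a minimal PyTorch sketch of a CoordConv-style layer that appends normalized x/y coordinate channels to its input before a standard convolution. The class name and the [-1, 1] normalization range are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a coordinate convolution layer (PyTorch assumed).
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d preceded by concatenation of normalized x/y coordinate channels."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Two extra input channels carry the x and y coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Normalized coordinates in [-1, 1], broadcast over the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```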
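The spatial information aggregation module is described only at a high level, so the following sketch is one plausible reading: parallel dilated branches enlarge the local receptive field, average pooling along each spatial axis encodes direction-aware context, and a small nonlinear bottleneck aggregates the result, in the spirit of coordinate attention. All layer names, the branch count, and the exact topology are assumptions.

```python
# Hypothetical sketch of the spatial information aggregation idea;
# not the paper's implementation.
import torch
import torch.nn as nn

class SpatialInfoAggregation(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Parallel dilated branches -> multi-scale local context.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)
        ])
        mid = max(channels // reduction, 8)
        # Shared nonlinear bottleneck over the concatenated H/W descriptors.
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True)
        )
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Summing the parallel branches enlarges the effective receptive field.
        y = sum(branch(x) for branch in self.branches)
        # Encode along each spatial direction (as in coordinate attention).
        ph = y.mean(dim=3, keepdim=True)                     # (b, c, h, 1)
        pw = y.mean(dim=2, keepdim=True).transpose(2, 3)     # (b, c, w, 1)
        z = self.fuse(torch.cat([ph, pw], dim=2))            # (b, mid, h+w, 1)
        zh, zw = z.split([h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(zh))                  # (b, c, h, 1)
        aw = torch.sigmoid(self.attn_w(zw.transpose(2, 3)))  # (b, c, 1, w)
        return y * ah * aw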
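Finally, a hedged sketch of how an optimal grasp rectangle can be decoded from the three output maps the abstract names (grasp quality, angle, and width). The (cos 2θ, sin 2θ) angle encoding and the fixed height-to-width ratio are conventions borrowed from common grasping networks and are assumptions here.

```python
# Decode the best grasp from per-pixel quality/angle/width maps (NumPy).
import numpy as np

def decode_best_grasp(quality, cos2t, sin2t, width, height_ratio=0.5):
    """Return (row, col, angle_rad, width_px, height_px) of the best grasp."""
    # The optimal grasp position is the pixel of maximum quality.
    row, col = np.unravel_index(np.argmax(quality), quality.shape)
    # Recover the grasp angle from its (cos 2θ, sin 2θ) encoding.
    angle = 0.5 * np.arctan2(sin2t[row, col], cos2t[row, col])
    w = float(width[row, col])
    return row, col, float(angle), w, height_ratio * w
```

The returned tuple defines the grasp rectangle: its center, orientation, opening width, and (assumed) jaw height.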
Renxiang CHEN, Tianran QIU, Lixia YANG, Zhitong ZHANG, Liang XIA. Occluded target grasping detection method based on spatial information aggregation[J]. Optics and Precision Engineering, 2024, 32(18): 2792
Received: May 16, 2024
Accepted: --
Published Online: Nov. 18, 2024
Author Email: CHEN Renxiang (manlou.yue@126.com)