Laser & Optoelectronics Progress, Vol. 60, Issue 6, 0615005 (2023)

Robot Dynamic Object Positioning and Grasping Method based on Two Stages

Yuebo Meng1,*, Qi Huang1, Jiuqiang Han2, Shengjun Xu1, and Zhou Wang1
Author Affiliations
  • 1College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, Shaanxi, China
  • 2College of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an 710049, Shaanxi, China
    Figures & Tables (15)
    Multi-scale context-aware single-channel fusion network structure
    Multi-scale perception layer
    Context embedding layer structure
    Bilateral guidance feature fusion module structure
    Feature-assisted convergence module structure
    Experimental results of object pose estimation. (a) Original images; (b) pose estimation result images
    Pipeline experiment platform
    Robot object grabbing diagrams. (a) Overall pictures; (b) partial pictures
    • Table 1. First seven-layer convolution structure of improved VGG

      Network layer | Input size  | Kernel size | Output size
      Conv1+BN+ReLU | 3×640×640   | 3×3         | 64×640×640
      Conv2+BN+ReLU | 64×640×640  | 3×3         | 64×640×640
      MaxPooling    | 64×640×640  | 2×2         | 64×320×320
      Conv3+BN+ReLU | 64×320×320  | 3×3         | 128×320×320
      Conv4+BN+ReLU | 128×320×320 | 3×3         | 128×320×320
      MaxPooling    | 128×320×320 | 2×2         | 128×160×160
      Conv5+BN+ReLU | 128×160×160 | 3×3         | 128×160×160
      Conv6+BN+ReLU | 128×160×160 | 3×3         | 128×160×160
      Conv7+BN+ReLU | 128×160×160 | 3×3         | 128×160×160
      MaxPooling    | 128×160×160 | 2×2         | 128×80×80
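The size progression in Table 1 follows two simple rules: every 3×3 convolution uses stride 1 with padding 1, so it changes only the channel count, while every 2×2 max-pooling with stride 2 halves the spatial dimensions. A minimal sketch (plain Python; the helper names are illustrative, not from the paper) that walks the stack and reproduces the final feature-map size:

```python
def conv3x3_size(c_out, shape):
    """3x3 conv, stride 1, padding 1: channels change, H and W are preserved."""
    _, h, w = shape
    return (c_out, h, w)

def maxpool2x2_size(shape):
    """2x2 max-pooling, stride 2: halves H and W, keeps channel count."""
    c, h, w = shape
    return (c, h // 2, w // 2)

# Walk the first seven conv layers (plus pooling) of the improved VGG in Table 1.
shape = (3, 640, 640)               # input image: C x H x W
shape = conv3x3_size(64, shape)     # Conv1 -> (64, 640, 640)
shape = conv3x3_size(64, shape)     # Conv2 -> (64, 640, 640)
shape = maxpool2x2_size(shape)      # -> (64, 320, 320)
shape = conv3x3_size(128, shape)    # Conv3 -> (128, 320, 320)
shape = conv3x3_size(128, shape)    # Conv4 -> (128, 320, 320)
shape = maxpool2x2_size(shape)      # -> (128, 160, 160)
for _ in range(3):                  # Conv5, Conv6, Conv7 -> (128, 160, 160)
    shape = conv3x3_size(128, shape)
shape = maxpool2x2_size(shape)      # -> (128, 80, 80), matching Table 1
print(shape)                        # (128, 80, 80)
```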
    • Table 2. Comparison of model parameters, model prediction time, and segmentation accuracy

      Method          | Size /MB | Average running time of test image /s | mIoU /%
      UNet[18]        | 94.97    | 2.23 | 95.6
      PSPNet[19]      | 9.31     | 0.36 | 83.5
      BiSeNet[20]     | 52.52    | 0.68 | 91.5
      ICNet[22]       | 31.52    | 0.96 | 95.3
      Fast-SCNN[23]   | 4.60     | 0.32 | 88.6
      BiSeNetV2[21]   | 20.47    | 0.51 | 95.1
      Proposed method | 19.08    | 0.41 | 96.8
    • Table 3. Accuracy and prediction speed comparison of multi-scale perception layers

      Method                       | mIoU /% | Average running time of test image /s
      Proposed method (one MSPL)   | 91.5    | 0.32
      Proposed method (two MSPL)   | 95.1    | 0.36
      Proposed method (three MSPL) | 96.8    | 0.41
      Proposed method (four MSPL)  | 97.2    | 0.48
    • Table 4. Pose data of object grasping points

      Item           | Coordinate | Angle /(°) | Prediction time /s
      Gel pen        | (545, 433) | 120.23     | 0.015
                     | (526, 517) | 58.34      |
      Paper knife    | (422, 485) | 124.55     | 0.014
                     | (463, 491) | -29.07     |
      Remote control | (368, 432) | 128.56     | 0.015
                     | (538, 331) | 31.20      |
      Scissors       | (407, 420) | 122.96     | 0.016
                     | (544, 399) | -34.17     |
      Screw          | (544, 379) | 127.37     | 0.014
                     | (654, 522) | 59.88      |
      Nut            | (695, 217) | 82.24      | 0.016
                     | (537, 318) | 84.25      |
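Each entry in Table 4 pairs a pixel coordinate with an in-plane rotation angle in degrees. A minimal sketch (plain Python; the function name is illustrative, not from the paper) of turning such an angle into a 2×2 rotation matrix that a downstream grasp planner could apply:

```python
import math

def grasp_rotation(angle_deg: float):
    """2x2 in-plane rotation matrix for a grasp angle given in degrees."""
    a = math.radians(angle_deg)
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

# Gel pen grasp point from Table 4: pixel coordinate (545, 433), angle 120.23 deg.
x, y = 545, 433
R = grasp_rotation(120.23)
# The gripper closing axis in image coordinates is the first column of R.
axis = (R[0][0], R[1][0])
```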
    • Table 5. First group of grasping test results

      Item           | Number of trials | Number of successful grasps | Success rate /%
      Remote control | 30 | 29 | 96.7
      Paper knife    | 30 | 27 | 90.0
      Scissors       | 30 | 28 | 93.3
      Screw          | 30 | 24 | 80.0
      Nut            | 30 | 26 | 86.7
      Gel pen        | 30 | 28 | 93.3
    • Table 6. Second group of grasping test results

      Item           | Number of trials | Number of successful grasps | Success rate /%
      Remote control | 30 | 29 | 96.7
      Paper knife    | 30 | 28 | 93.3
      Scissors       | 30 | 28 | 93.3
      Screw          | 30 | 26 | 86.7
      Nut            | 30 | 26 | 86.7
      Gel pen        | 30 | 28 | 93.3
    • Table 7. Third group of grasping test results

      Item           | Number of trials | Number of successful grasps | Success rate /%
      Remote control | 30 | 30 | 100.0
      Paper knife    | 30 | 28 | 93.3
      Scissors       | 30 | 28 | 93.3
      Screw          | 30 | 28 | 93.3
      Nut            | 30 | 28 | 96.7
      Gel pen        | 30 | 29 | 96.7
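The success rates in Tables 5-7 are the ratio of successful grasps to trials, expressed as a percentage and rounded to one decimal place. A one-line helper (plain Python; the function name is illustrative) reproducing the figures:

```python
def success_rate(successes: int, trials: int) -> float:
    """Grasp success rate in percent, rounded to one decimal place."""
    return round(100.0 * successes / trials, 1)

# Remote control in the third group (Table 7): 30 successes out of 30 trials.
print(success_rate(30, 30))   # 100.0
print(success_rate(29, 30))   # 96.7
print(success_rate(28, 30))   # 93.3
```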
    Yuebo Meng, Qi Huang, Jiuqiang Han, Shengjun Xu, Zhou Wang. Robot Dynamic Object Positioning and Grasping Method based on Two Stages[J]. Laser & Optoelectronics Progress, 2023, 60(6): 0615005

    Paper Information

    Category: Machine Vision

    Received: Dec. 27, 2021

    Accepted: Jan. 27, 2022

    Published Online: Mar. 16, 2023

    Corresponding author email: Yuebo Meng (mengyuebo@163.com)

    DOI: 10.3788/LOP213364
