Journal of Qingdao University (Engineering & Technology Edition), Vol. 40, Issue 2, p. 30 (2025)
GraspNet-based Category-oriented Grasping Method for Object Planar Scenes
SONG Shimiao, GU Feifan, GE Jiashang, YANG Jie. GraspNet-based Category-oriented Grasping Method for Object Planar Scenes[J]. Journal of Qingdao University (Engineering & Technology Edition), 2025, 40(2): 30.
Received: Apr. 14, 2025
Accepted: Aug. 22, 2025
Published Online: Aug. 22, 2025
Author email: YANG Jie (yangjie@qdu.edu.cn)