Computer Applications and Software, Vol. 42, Issue 4, 279 (2025)

FAST TRAINING METHOD OF DEEP REINFORCEMENT LEARNING DIMENSIONALITY REDUCTION FOR MECHANICAL ARM

Wang Min1, Wang Zan1, Li Shen2, Chen Lijia1, Fan Xianbojun1, Wang Chenlu1, and Liu Mingguo1
Author Affiliations
  • 1School of Physics and Electronics, Henan University, Kaifeng 475000, Henan, China
  • 2Kaifeng Pingmei New Carbon Material Technology Co., Ltd., Kaifeng 475000, Henan, China

    To address the excessively long training cycle of deep reinforcement learning algorithms when a manipulator is trained across all of its degrees of freedom in a 3D environment, a fast dimensionality-reduction training method for manipulators is proposed. By decomposing the grasping task, the training of the manipulator's lateral and longitudinal servos is decoupled, and the solution space is compressed through dimensionality reduction, which simplifies the training process while preserving the execution accuracy of the actions. The deep deterministic policy gradient (DDPG) algorithm is improved: a secondary value estimate is computed on the same batch of samples, the update of the policy network is delayed, and prioritized experience replay is added, which together markedly improve the training efficiency of DDPG. Experimental results show that the proposed method offers low training complexity, high speed, and low cost, achieving a grasping success rate of 98%, which favors its application and promotion in industrial settings.
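    The abstract names three DDPG refinements: a secondary value estimate on the same batch, a delayed policy-network update, and prioritized experience replay. The sketch below is a hypothetical illustration of those generic mechanisms, not code from the paper; all names (`PrioritizedReplayBuffer`, `twin_target`, `POLICY_DELAY`) are our own, and the network details are omitted.

    ```python
    import numpy as np

    class PrioritizedReplayBuffer:
        """Proportional prioritized experience replay (illustrative sketch):
        transitions with larger TD errors are sampled more often."""

        def __init__(self, capacity, alpha=0.6, eps=1e-6):
            self.capacity = capacity
            self.alpha = alpha      # how strongly priorities skew sampling
            self.eps = eps          # keeps every priority strictly positive
            self.data = []
            self.priorities = []

        def add(self, transition, td_error=1.0):
            if len(self.data) >= self.capacity:   # evict the oldest when full
                self.data.pop(0)
                self.priorities.pop(0)
            self.data.append(transition)
            self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

        def sample(self, batch_size, rng):
            probs = np.asarray(self.priorities)
            probs = probs / probs.sum()
            idx = rng.choice(len(self.data), size=batch_size, p=probs)
            return idx, [self.data[i] for i in idx]

        def update_priorities(self, idx, td_errors):
            for i, e in zip(idx, td_errors):
                self.priorities[i] = (abs(e) + self.eps) ** self.alpha


    def twin_target(q1, q2):
        """Secondary value estimation on the same batch: take the smaller of
        two critic estimates, which damps Q-value overestimation."""
        return np.minimum(q1, q2)


    POLICY_DELAY = 2  # critic updates per policy-network update

    def maybe_update_policy(step):
        """Delayed policy update: the actor is refreshed only every
        POLICY_DELAY critic updates."""
        return step % POLICY_DELAY == 0
    ```

    In a full training loop the critics would be updated every step against `twin_target(...)`, the sampled transitions' priorities refreshed with the new TD errors via `update_priorities`, and the policy network updated only when `maybe_update_policy(step)` is true.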

    Paper Information

    Received: Jan. 18, 2022

    Accepted: Aug. 25, 2025

    Published Online: Aug. 25, 2025

    DOI:10.3969/j.issn.1000-386x.2025.04.040
