Computer Engineering, Vol. 51, Issue 8, 107 (2025)

Sign Language Recognition Using Data Gloves Based on EWBiLSTM-ATT

WU Donghui1,*, WANG Jinfeng1, QIU Sen2, and LIU Guozhi1
Author Affiliations
  • 1College of Building Environment Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, Henan, China
  • 2School of Control Science and Engineering, Dalian University of Technology, Dalian 116081, Liaoning, China

    Abstract

    Sign language recognition has received widespread attention in recent years. However, existing sign language recognition models face challenges such as long training times and high computational costs. To address these issues, this study proposes a hybrid deep learning method, termed the EWBiLSTM-ATT model, that integrates an attention mechanism with an Expanded Wide-kernel Deep Convolutional Neural Network (EWDCNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network, operating on data obtained from a wearable data glove. First, widening the first convolutional layer reduces the model's parameter count, which improves computational speed; deepening the subsequent EWDCNN convolutional layers then strengthens the model's ability to extract sign language features automatically. Second, BiLSTM is introduced as the temporal model to capture the dynamic temporal information in sequential sign language data, effectively handling temporal relationships in the sensor signals. Finally, the attention mechanism learns a parameter matrix that assigns different weights to the BiLSTM hidden states and maps them to a weighted sum, allowing the model to automatically select the key time segments related to a gesture by computing an attention weight for each time step. This study uses an STM32F103 as the main control module and builds a data glove sign language acquisition platform with MPU6050 inertial sensors and Flex Sensor 4.5 bend sensors as the core components. Sixteen dynamic sign language actions are selected to construct the GR-Dataset for model training. Under the same experimental conditions, the EWBiLSTM-ATT model achieves a recognition rate of 99.40%, exceeding the CLT-net, CNN-GRU, CLA-net, and CNN-GRU-ATT models by 10.36, 8.41, 3.87, and 3.05 percentage points, respectively, while its total training time is reduced to 57%, 61%, 55%, and 56% of those models' times, respectively.
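    For concreteness, a minimal PyTorch sketch of the EWBiLSTM-ATT pipeline described above follows. All layer counts, kernel sizes, channel widths, and the 11-channel input layout (assumed: five flex channels plus six MPU6050 IMU channels) are illustrative assumptions; the abstract does not specify these values.

    import torch
    import torch.nn as nn

    class EWBiLSTMATT(nn.Module):
        def __init__(self, in_channels=11, num_classes=16, hidden=64):
            super().__init__()
            # "Expanded Wide-kernel" front end: a wide first convolution
            # (kernel size 16 assumed) cuts parameters and compute early;
            # the deeper, narrower layers that follow extract sign features.
            self.ewdcnn = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=16, stride=2, padding=8),
                nn.BatchNorm1d(32), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=3, padding=1),
                nn.BatchNorm1d(64), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, padding=1),
                nn.BatchNorm1d(64), nn.ReLU(),
            )
            # BiLSTM captures temporal dynamics in both directions.
            self.bilstm = nn.LSTM(64, hidden, batch_first=True,
                                  bidirectional=True)
            # Attention: a learned projection scores each time step's hidden
            # state; softmax over time yields the attention weights.
            self.att_proj = nn.Linear(2 * hidden, 2 * hidden)
            self.att_vec = nn.Linear(2 * hidden, 1, bias=False)
            self.classifier = nn.Linear(2 * hidden, num_classes)

        def forward(self, x):                       # x: (batch, channels, time)
            h = self.ewdcnn(x).transpose(1, 2)      # (batch, time', 64)
            h, _ = self.bilstm(h)                   # (batch, time', 2*hidden)
            scores = self.att_vec(torch.tanh(self.att_proj(h)))
            alpha = torch.softmax(scores, dim=1)    # weight per time step
            context = (alpha * h).sum(dim=1)        # weighted sum of states
            return self.classifier(context)

    # Dummy batch: 8 sequences, 11 sensor channels, 128 time steps.
    model = EWBiLSTMATT()
    print(model(torch.randn(8, 11, 128)).shape)     # torch.Size([8, 16])

    The additive scoring above (a learned projection, a tanh, and a scoring vector) is one common realization of learning a parameter matrix that weights the hidden states; the abstract does not pin down the exact attention parameterization, so other variants would fit the description equally well.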

    Citation:

    WU Donghui, WANG Jinfeng, QIU Sen, LIU Guozhi. Sign Language Recognition Using Data Gloves Based on EWBiLSTM-ATT[J]. Computer Engineering, 2025, 51(8): 107

    Paper Information

    Received: Aug. 5, 2024

    Accepted: Aug. 26, 2025

    Published Online: Aug. 26, 2025

    Author Email: WU Donghui (w_donghui@163.com)

    DOI: 10.19678/j.issn.1000-3428.0070202
