Gesture recognition method based on residual fusion dual-stream graph convolutional network
DOI:
Author:
Affiliation:

1. College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China; 2. Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China

Author biography:

Corresponding author:

CLC number:

TP391.9

Fund project:

Major Special Project of the State Oceanic Administration (国海科字[2016]494号 No.30)


Abstract:

To address the problem that traditional graph convolutional networks tend to ignore the correlation between spatial and temporal features, a dual-stream network model that fuses a residual structure with a graph convolutional network is designed. The network consists of two channels, a spatial stream and a temporal stream: the gesture skeleton data are constructed into a spatial graph and a temporal graph that serve as the inputs of the two channels, and separating the spatial and temporal dimensions greatly improves training speed. To increase the network depth while avoiding problems such as vanishing gradients, a residual structure is embedded and improved, which makes more effective use of temporal features and preserves feature diversity. Finally, the spatial node-set sequence and the temporal edge-set sequence output by the two channels are concatenated and fed into a Softmax classifier, which yields the recognition result. The proposed method is evaluated on the CSL and DEVISIGN-L gesture datasets, where it reaches recognition accuracies of 96.2% and 69.3%, respectively, demonstrating that the method is competitive.
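The abstract only outlines this pipeline, so a minimal sketch of the dual-stream idea is given below. It assumes PyTorch, and every name and hyperparameter in it (ResidualGraphConv, ResidualTemporalConv, DualStreamGCN, the chain-shaped 21-joint adjacency, hidden width 64, depth 4) is hypothetical rather than taken from the paper; the sketch only illustrates the described spatial/temporal separation, residual connections, stream concatenation, and Softmax classification, not the authors' implementation.

# Hypothetical sketch of the dual-stream residual GCN described in the abstract;
# module names, adjacency, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualGraphConv(nn.Module):
    """Spatial graph convolution (neighbor aggregation + linear map) with an identity residual."""
    def __init__(self, channels, adjacency):
        super().__init__()
        self.register_buffer("A_hat", adjacency)            # normalized joint adjacency, shape (V, V)
        self.linear = nn.Linear(channels, channels)

    def forward(self, x):                                    # x: (N, T, V, C)
        out = torch.einsum("vw,ntwc->ntvc", self.A_hat, x)   # aggregate features of neighboring joints
        out = self.linear(out)
        return F.relu(out + x)                               # identity residual connection


class ResidualTemporalConv(nn.Module):
    """1-D convolution along the time axis, applied per joint, with an identity residual."""
    def __init__(self, channels, kernel_size=9):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                    # x: (N, T, V, C)
        n, t, v, c = x.shape
        h = x.permute(0, 2, 3, 1).reshape(n * v, c, t)       # fold the joint axis into the batch
        h = self.conv(h).reshape(n, v, c, t).permute(0, 3, 1, 2)
        return F.relu(h + x)                                 # identity residual connection


class DualStreamGCN(nn.Module):
    """Spatial and temporal streams processed separately, then concatenated for classification."""
    def __init__(self, adjacency, in_channels=3, hidden=64, num_classes=100, depth=4):
        super().__init__()
        self.embed = nn.Linear(in_channels, hidden)
        self.spatial_stream = nn.Sequential(
            *[ResidualGraphConv(hidden, adjacency) for _ in range(depth)])
        self.temporal_stream = nn.Sequential(
            *[ResidualTemporalConv(hidden) for _ in range(depth)])
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                                    # x: (N, T, V, C_in) joint coordinates
        h = self.embed(x)
        s = self.spatial_stream(h).mean(dim=(1, 2))          # global pooling over frames and joints
        t = self.temporal_stream(h).mean(dim=(1, 2))
        logits = self.classifier(torch.cat([s, t], dim=-1))  # concatenate the two streams
        return F.softmax(logits, dim=-1)                     # class probabilities


# Toy usage: 21 hand joints, 32 frames, 3-D coordinates, chain-shaped joint adjacency.
V = 21
A = torch.eye(V)
idx = torch.arange(V - 1)
A[idx, idx + 1] = 1.0
A[idx + 1, idx] = 1.0
A = A / A.sum(dim=1, keepdim=True)                           # simple row normalization
model = DualStreamGCN(A, num_classes=100)
probs = model(torch.randn(2, 32, V, 3))                      # (batch, frames, joints, coords)
print(probs.shape)                                           # torch.Size([2, 100])

In practice one would train on the pre-Softmax logits with a cross-entropy loss and replace the toy chain adjacency with the skeleton graph defined by the dataset.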

Cite this article

程换新, 成凯, 程力, 蒋泽芹. Gesture recognition method based on residual fusion dual-stream graph convolutional network[J]. Electronic Measurement Technology, 2022, 45(9): 20-24.

History
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2024-05-08
  • Publication date: