Abstract
To address the problems that current skeleton-based action recognition methods neglect the multi-scale dependencies among skeleton joints and fail to exploit convolutional kernels appropriately for temporal modeling, this paper proposes a Selective Multi-Scale Graph Convolutional Network (SMS-GCN) model for action recognition. First, the construction principle of the human skeleton graph and the structure of the channel-wise topology refinement graph convolutional network are introduced. Second, a pairwise-joint adjacency matrix and a multi-joint adjacency matrix are constructed to generate a multi-scale channel-wise topology refinement adjacency matrix, and a graph convolutional network is incorporated to form a Multi-Scale Graph Convolution (MS-GC) module that models the multi-scale dependencies among skeleton joints. Then, based on multi-scale temporal convolution and a selective large-kernel network, a Selective Multi-Scale Temporal Convolution (SMS-TC) module is proposed to fully extract useful temporal contextual features; combining the MS-GC and SMS-TC modules, the SMS-GCN model is obtained and trained on multi-stream data inputs. Finally, extensive experiments on the NTU-RGB+D and NTU-RGB+D 120 datasets show that the model captures more joint features and learns useful temporal information, achieving excellent accuracy and generalization ability.
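As a concrete illustration of the spatial modeling idea summarized above, the following NumPy sketch builds a multi-scale adjacency from the pairwise skeleton graph and applies a multi-scale graph convolution. This is a minimal sketch under stated assumptions, not the paper's implementation: matrix powers of the normalized pairwise adjacency stand in for the multi-joint adjacency, the channel-wise topology refinement term is omitted, and all function names are illustrative.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^(-1/2) (A + I) D^(-1/2).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_scale_adjacency(A, num_scales=3):
    # Scale k is the k-hop propagation matrix of the pairwise skeleton graph:
    # scale 1 keeps pairwise joint relations, while higher scales model
    # multi-joint dependencies spanning larger parts of the skeleton.
    A_norm = normalize_adj(A)
    return np.stack([np.linalg.matrix_power(A_norm, k)
                     for k in range(1, num_scales + 1)])  # (S, V, V)

def ms_graph_conv(X, A_ms, W):
    # X: (V, C_in) joint features, A_ms: (S, V, V), W: (S, C_in, C_out).
    # The per-scale graph convolutions are summed into one output feature.
    return sum(A_ms[s] @ X @ W[s] for s in range(A_ms.shape[0]))
```

For the 25-joint NTU-RGB+D skeleton, `A` would be the 25 × 25 bone-connectivity matrix; any symmetric adjacency works in this sketch.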
Objective Human action recognition plays a key role in computer vision and has gained significant attention due to its broad range of applications. Skeleton data, derived from human action samples, is particularly robust to variations in camera viewpoint, illumination, and background occlusion, offering advantages over depth image and video data. Recent advancements in skeleton-based action recognition using Graph Convolutional Networks (GCNs) have demonstrated effective extraction of the topological relationships within skeleton data. However, limitations remain in some current approaches employing GCNs: (1) Many methods focus on the discriminative dependencies between pairs of joints, failing to effectively capture the multi-scale discriminative dependencies across the entire skeleton. (2) Some temporal modeling methods use dilated convolutions for simple feature fusion, but do not employ convolutional kernels in a manner suitable for effective temporal modeling. To address these challenges, a selective multi-scale GCN is proposed for action recognition, designed to capture more joint features and learn valuable temporal information.
Methods The proposed model consists of two key modules: a multi-scale graph convolution module and a selective multi-scale temporal convolution module. First, the multi-scale graph convolution module serves as the primary spatial modeling component. It generates a multi-scale, channel-wise topology refinement adjacency matrix to enhance the model's ability to learn multi-scale discriminative dependencies of skeleton joints, thereby capturing more joint features. Specifically, the pairwise joint adjacency matrix is used to capture the interactive relationships between joint pairs, enabling the extraction of local motion details. Additionally, the multi-joint adjacency matrix emphasizes the overall action feature changes, improving the model's spatial representation of the skeleton data. Second, the selective multi-scale temporal convolution module is designed to capture valuable temporal contextual information. This module comprises three stages: feature extraction, temporal selection, and feature fusion. In the feature extraction stage, convolution and max-pooling operations are applied to obtain temporal features at different scales. Once the multi-scale temporal features are extracted, the temporal selection stage uses global max and average pooling to select salient features while preserving key details. This results in the generation of temporal selection masks without directly fusing temporal features across scales, thus reducing redundancy. In the feature fusion stage, the output temporal feature is obtained by weighted fusion of the temporal features and the selection masks. Finally, by combining the multi-scale graph convolution module with the selective multi-scale temporal convolution module, the proposed model extracts multi-stream data from skeleton data, generating various prediction scores. These scores are then fused through weighted summation to produce the final prediction outcome.
Results and Discussions Extensive experiments are conducted on two large-scale datasets: NTU-RGB+D and NTU-RGB+D 120, demonstrating the effectiveness and strong generalization performance of the proposed model. When the convolution kernel size in the multi-scale graph convolution module is set to 3, the model performs optimally, capturing more representative joint features (Table 1). The results (Table 4) show that the temporal selection stage is critical within the selective multi-scale temporal convolution module, significantly enhancing the model's ability to extract temporal contextual information. Additionally, ablation studies (Table 5) confirm the effectiveness of each component in the proposed model, highlighting their contributions to improving recognition performance. The results (Tables 6 and 7) demonstrate that the proposed model outperforms state-of-the-art methods, achieving superior recognition accuracy and strong generalization capabilities.
Conclusions This study presents a selective multi-scale GCN model for skeleton-based action recognition. The multi-scale graph convolution module effectively captures the multi-scale discriminative dependencies of skeleton joints, enabling the model to fully extract more joint features. By selecting appropriate temporal convolution kernels, the selective multi-scale temporal convolution module extracts and fuses temporal contextual information, thereby emphasizing useful temporal features. Experimental results on the NTU-RGB+D and NTU-RGB+D 120 datasets demonstrate that the proposed model achieves excellent accuracy and robust generalization performance.
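The three stages of the selective multi-scale temporal convolution described in the Methods section (feature extraction, temporal selection, feature fusion) can be sketched as follows. This is a hedged NumPy illustration, not the authors' implementation: each feature-extraction branch is reduced to a fixed-weight dilated temporal smoothing (standing in for learned convolution and max-pooling branches), and the selection masks come from a per-channel softmax over global max- plus average-pooled branch statistics; all names are illustrative.

```python
import numpy as np

def temporal_branch(X, kernel=3, dilation=1):
    # X: (C, T) per-joint feature over time. One temporal scale, modeled here
    # as a dilated moving average with 'same' padding (kernel must be odd).
    C, T = X.shape
    pad = dilation * (kernel // 2)
    Xp = np.pad(X, ((0, 0), (pad, pad)))
    w = np.full(kernel, 1.0 / kernel)          # fixed weights for the sketch
    out = np.empty_like(X)
    for t in range(T):
        out[:, t] = Xp[:, t : t + 2 * pad + 1 : dilation] @ w
    return out

def selective_fusion(branches):
    # Temporal selection: global max + average pooling summarize each branch,
    # a softmax across branches yields per-channel selection masks, and the
    # fused output is the mask-weighted sum of the branch features.
    stats = np.stack([b.max(axis=1) + b.mean(axis=1) for b in branches])  # (S, C)
    e = np.exp(stats - stats.max(axis=0, keepdims=True))
    masks = e / e.sum(axis=0, keepdims=True)                              # (S, C)
    return sum(m[:, None] * b for m, b in zip(masks, branches))           # (C, T)
```

Because the masks form a convex combination over branches for each channel, the fused feature stays within the range spanned by the branch features, which is what allows the selection stage to emphasize one temporal scale without directly mixing (and thus blurring) the others.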
Authors
曹毅
李杰
叶培涛
王彦雯
吕贤海
CAO Yi; LI Jie; YE Peitao; WANG Yanwen; LÜ Xianhai (School of Mechanical Engineering, Jiangnan University, Wuxi 214122, China; Jiangsu Key Laboratory of Advanced Food Manufacturing Equipment and Technology, Jiangnan University, Wuxi 214122, China)
Source
《电子与信息学报》
Indexed in the Peking University Chinese Core Journals list (北大核心)
2025, Issue 3, pp. 839-849 (11 pages)
Journal of Electronics & Information Technology
Funding
National Natural Science Foundation of China (51375209)
Six Talent Peaks Project of Jiangsu Province (ZBZZ-012)
Programme of Introducing Talents of Discipline to Universities, the 111 Project (B18027).
Keywords
Skeleton-based action recognition
Graph Convolutional Network(GCN)
Multi-scale channel-wise topology refinement adjacency matrix
Selective multi-scale temporal convolution
Selective multi-scale graph convolutional network
About the Authors
Corresponding author: CAO Yi, male, professor, Ph.D.; research interests: robot mechanisms and deep learning; caoyi@jiangnan.edu.cn. LI Jie, male, master's student; research interests: deep learning and action recognition. YE Peitao, male, master's student; research interests: robot control systems and path planning. WANG Yanwen, male, master's student; research interests: deep learning and voiceprint recognition. LÜ Xianhai, male, master's student; research interests: robot mechanisms and action recognition.