W. Peng, J. Shi and G. Zhao, "Spatial Temporal Graph Deconvolutional Network for Skeleton-Based Human Action Recognition," in IEEE Signal Processing Letters, vol. 28, pp. 244-248, 2021, doi: 10.1109/LSP.2021.3049691
Spatial temporal graph deconvolutional network for skeleton-based human action recognition
|Author:||Peng, Wei (1); Shi, Jingang (2); Zhao, Guoying (1)|
1Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
2School of Software Engineering, Xi’an Jiaotong University, China
|Online Access:||PDF Full Text (PDF, 1 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2021042611784|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2021-04-26|
Benefiting from the powerful ability of spatial temporal Graph Convolutional Networks (ST-GCNs), skeleton-based human action recognition has achieved promising success. However, node interaction through message propagation does not always provide complementary information. Instead, it may even produce destructive noise and thus make the learned representations indistinguishable. Inevitably, the graph representation also becomes over-smoothed, especially when multiple GCN layers are stacked. This paper proposes spatial-temporal graph deconvolutional networks (ST-GDNs), a novel and flexible graph deconvolution technique, to alleviate this issue. At its core, the method provides better message aggregation by removing the embedding redundancy of the input graphs at the node-wise, frame-wise, or element-wise level at different network layers. Extensive experiments on three of the most challenging current benchmarks verify that ST-GDN consistently improves performance and substantially reduces model size on these datasets.
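The over-smoothing issue the abstract refers to can be illustrated numerically: repeatedly applying the neighborhood-averaging step at the heart of a GCN layer (shown here without learned weights or nonlinearities, on a hypothetical toy graph, not the authors' skeleton data or their ST-GDN method) drives all node features toward the same vector, making nodes indistinguishable.

```python
import numpy as np

# Toy 4-node chain graph with self-loops (A + I), as used in standard GCNs.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Random-walk normalization D^{-1} A: each row averages a node's neighborhood.
d = A.sum(axis=1)
A_rw = A / d[:, None]

# Random node features (4 nodes, 3 feature channels).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))

# One propagation step per "layer"; stacking many layers mimics a deep GCN.
for layer in range(100):
    X = A_rw @ X

# After many layers, every row (node embedding) has converged to nearly the
# same vector -- the over-smoothing that ST-GDN is designed to counteract.
spread = np.ptp(X, axis=0).max()
print(f"max feature spread across nodes after 100 layers: {spread:.2e}")
```

Because the row-stochastic averaging matrix has a single dominant eigenvalue of 1 with an all-ones eigenvector, the node features collapse onto a common vector at a geometric rate; the printed spread is tiny, whereas the initial random features differ by order-one amounts.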
IEEE signal processing letters
|Pages:||244 - 248|
|Type of Publication:||
A1 Journal article – refereed
|Field of Science:||
113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
This work was supported in part by the ICT2023 Project under Grant 328115, in part by the Academy of Finland Project MiGA under Grant 316765, in part by Infotech Oulu, and in part by the National Natural Science Foundation of China under Grant 62002283.
|Academy of Finland Grant Number:||
328115 (Academy of Finland Funding decision)
316765 (Academy of Finland Funding decision)
© The Authors 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.