Wei Peng, Jingang Shi, Zhaoqiang Xia, and Guoying Zhao. 2020. Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action Recognition. In Proceedings of the 28th ACM International Conference on Multimedia. Association for Computing Machinery, New York, NY, USA, 1432–1440. DOI: https://doi.org/10.1145/3394171.3413910
Mix dimension in Poincaré geometry for 3D skeleton-based action recognition
|Author:||Peng, Wei1; Shi, Jingang2; Xia, Zhaoqiang3; Zhao, Guoying1|
1Center for Machine Vision and Signal Analysis, University of Oulu, Finland
2School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
3Northwestern Polytechnical University, Xi’an, China
|Online Access:||PDF Full Text (PDF, 0.5 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe202102175104|
|Publisher:||Association for Computing Machinery|
|Publish Date:||2021-02-17|
Graph Convolutional Networks (GCNs) have demonstrated a powerful ability to model irregular data, e.g., skeletal data in human action recognition, providing an exciting new way to fuse rich structural information for nodes residing in different parts of a graph. In human action recognition, current works introduce a dynamic graph generation mechanism to better capture the underlying semantic skeleton connections and thus improve performance. In this paper, we provide an orthogonal way to explore these underlying connections. Instead of introducing an expensive dynamic graph generation paradigm, we build a more efficient GCN on a Riemannian manifold, which we believe is a more suitable space in which to model graph data, so that the extracted representations fit the embedding matrix. Specifically, we present a novel spatial-temporal GCN (ST-GCN) architecture defined via Poincaré geometry, which is better able to model the latent anatomy of structured data. To further explore the optimal projection dimension in the Riemannian space, we mix different dimensions on the manifold and provide an efficient way to search for the dimension of each ST-GCN layer. With the resulting architecture, we evaluate our method on the two current largest-scale 3D action recognition datasets, i.e., NTU RGB+D and NTU RGB+D 120. The comparison results show that our model achieves superior performance under all given evaluation metrics with only 40% of the model size of the previous best GCN method, which demonstrates the effectiveness of our approach.
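The abstract's core idea is to embed skeleton features in the Poincaré ball rather than Euclidean space. As context for readers unfamiliar with hyperbolic embeddings, the following is a minimal NumPy sketch of the two standard Poincaré-ball primitives such models rely on: the exponential map at the origin (which lifts an ordinary Euclidean layer output into the ball) and the geodesic distance via Möbius addition. This follows the common hyperbolic-neural-network formulation, not the authors' implementation; all function names are illustrative.

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition of x and y in the Poincare ball of curvature -c."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den

def expmap0(v, c=1.0, eps=1e-9):
    """Exponential map at the origin: projects a Euclidean (tangent-space)
    vector v into the Poincare ball, yielding a valid hyperbolic point."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v) + eps
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def dist(x, y, c=1.0):
    """Geodesic distance d_c(x, y) = (2 / sqrt(c)) * artanh(sqrt(c) * ||(-x) (+) y||)."""
    sqrt_c = np.sqrt(c)
    return (2.0 / sqrt_c) * np.arctanh(sqrt_c * np.linalg.norm(mobius_add(-x, y, c)))
```

In a hyperbolic ST-GCN layer, per-joint features produced by Euclidean operations would be mapped through `expmap0` so that subsequent aggregation and classification can use the geodesic distance, whose exponential growth with radius is what makes the ball well suited to hierarchical (tree-like) skeleton structure.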
|Pages:||1432 - 1440|
MM '20: Proceedings of the 28th ACM International Conference on Multimedia
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences|
This work was supported by the ICT 2023 project (grant 328115), the Academy of Finland project MiGA (grant 316765), and Infotech Oulu. The authors also wish to acknowledge CSC-IT Center for Science, Finland, for computational resources.
|Academy of Finland Grant Number:||316765; 328115 (Academy of Finland funding decisions)|
© 2020 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in MM '20: Proceedings of the 28th ACM International Conference on Multimedia, https://doi.org/10.1145/3394171.3413910.