University of Oulu

Z. Yu et al., "Searching Multi-Rate and Multi-Modal Temporal Enhanced Networks for Gesture Recognition," in IEEE Transactions on Image Processing, vol. 30, pp. 5626-5640, 2021, doi: 10.1109/TIP.2021.3087348

Searching multi-rate and multi-modal temporal enhanced networks for gesture recognition

Saved in:
Author: Yu, Zitong1; Zhou, Benjia2; Wan, Jun3;
Organizations: 1Center for Machine Vision and Signal Analysis, University of Oulu, Oulu 90014, Finland
2Macau University of Science and Technology, Macau 999078, China
3National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
4DAMO Academy, Alibaba Group (U.S.) Inc., Bellevue, WA, 98004, USA
5Westlake University, Hangzhou 310012, China
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 3.7 MB)
Persistent link:
Language: English
Published: Institute of Electrical and Electronics Engineers, 2021
Publish Date: 2021-09-01


Gesture recognition has attracted considerable attention owing to its great potential in applications. Although great progress has been made recently in multi-modal learning methods, existing methods still lack effective integration to fully explore the synergies among spatio-temporal modalities for gesture recognition. This is partly because existing manually designed network architectures are inefficient at the joint learning of multiple modalities. In this paper, we propose the first neural architecture search (NAS)-based method for RGB-D gesture recognition. The proposed method includes two key components: 1) enhanced temporal representation via the proposed 3D Central Difference Convolution (3D-CDC) family, which captures rich temporal context by aggregating temporal difference information; and 2) optimized backbones for multi-sampling-rate branches and lateral connections among varied modalities. The resulting multi-modal multi-rate network provides a new perspective for understanding the relationship between RGB and depth modalities and their temporal dynamics. Comprehensive experiments on three benchmark datasets (IsoGD, NvGesture, and EgoGesture) demonstrate state-of-the-art performance in both single- and multi-modality settings. The code is available at
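The central-difference idea behind the 3D-CDC family can be illustrated in one temporal dimension: the output blends a vanilla convolution with a term built from differences against the window's centre sample, controlled by a weight theta. The sketch below is a minimal NumPy illustration of that idea, not the authors' 3D implementation; the function name, the 1-D simplification, and the default theta are assumptions for illustration only.

```python
import numpy as np

def central_difference_conv1d(x, w, theta=0.7):
    """Illustrative 1-D central-difference convolution (valid padding).

    y[t] = sum_k w[k] * x[t+k]  -  theta * x[t+c] * sum(w),
    which equals theta * conv of the centre-differences (x[t+k] - x[t+c])
    plus (1 - theta) * the vanilla convolution.
    """
    k = len(w)
    c = k // 2                       # centre of the temporal window
    T = len(x) - k + 1               # number of valid output positions
    y = np.empty(T)
    for t in range(T):
        vanilla = np.dot(w, x[t:t + k])              # plain (correlation-form) conv
        y[t] = vanilla - theta * x[t + c] * w.sum()  # subtract centre-weighted term
    return y
```

With theta = 0 this reduces to a plain convolution, while theta = 1 responds only to temporal change: a constant (static) input yields exactly zero output, which is one way to see how difference aggregation emphasizes motion over appearance.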


Series: IEEE transactions on image processing
ISSN: 1057-7149
ISSN-E: 1941-0042
ISSN-L: 1057-7149
Volume: 30
Pages: 5626-5640
DOI: 10.1109/TIP.2021.3087348
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This work was supported in part by the Academy of Finland for project MiGA under Grant 316765, in part by the ICT 2023 Project under Grant 328115, in part by the Infotech Oulu, in part by the Chinese National Natural Science Foundation under Project 61961160704 and Project 61876179, in part by the Science and Technology Development Fund of Macau under Grant 0010/2019/AFJ and Grant 0025/2019/AKP, and in part by the External Cooperation Key Project of the Chinese Academy of Sciences under Grant 173211KYSB20200002.
Academy of Finland Grant Number: 316765
Detailed Information: 316765 (Academy of Finland Funding decision)
328115 (Academy of Finland Funding decision)
Copyright information: © 2021 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see