Z. Yu et al., "Searching Multi-Rate and Multi-Modal Temporal Enhanced Networks for Gesture Recognition," in IEEE Transactions on Image Processing, vol. 30, pp. 5626-5640, 2021, doi: 10.1109/TIP.2021.3087348.

Searching multi-rate and multi-modal temporal enhanced networks for gesture recognition

Author: Yu, Zitong1; Zhou, Benjia2; Wan, Jun3;
Organizations: 1Center for Machine Vision and Signal Analysis, University of Oulu, Oulu 90014, Finland
2Macau University of Science and Technology, Macau 999078, China
3National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
4DAMO Academy, Alibaba Group (U.S.) Inc., Bellevue, WA, 98004, USA
5Westlake University, Hangzhou 310012, China
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 8.1 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2021090144890
Language: English
Published: Institute of Electrical and Electronics Engineers, 2021
Publish Date: 2021-09-01
Description:

Abstract

Gesture recognition has attracted considerable attention owing to its great potential in applications. Although great progress has been made recently in multi-modal learning methods, existing methods still lack effective integration to fully explore synergies among spatio-temporal modalities for gesture recognition. These problems are partly due to the fact that existing manually designed network architectures have low efficiency in the joint learning of multiple modalities. In this paper, we propose the first neural architecture search (NAS)-based method for RGB-D gesture recognition. The proposed method includes two key components: 1) enhanced temporal representation via the proposed 3D Central Difference Convolution (3D-CDC) family, which is able to capture rich temporal context by aggregating temporal difference information; and 2) optimized backbones for multi-sampling-rate branches and lateral connections among varied modalities. The resultant multi-modal multi-rate network provides a new perspective for understanding the relationship between RGB and depth modalities and their temporal dynamics. Comprehensive experiments are performed on three benchmark datasets (IsoGD, NvGesture, and EgoGesture), demonstrating state-of-the-art performance in both single- and multi-modality settings. The code is available at https://github.com/ZitongYu/3DCDC-NAS.
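For orientation, the central-difference idea behind 3D-CDC admits a compact implementation: since the difference term sum_n w(p_n) * (x(p_0 + p_n) - x(p_0)) equals conv(x) minus x(p_0) * sum_n w(p_n), it can be computed with a second convolution whose kernel is the original weights summed into a 1x1x1 kernel. The PyTorch sketch below illustrates this identity for a single generic variant only; the class name, the default theta, and the uniform spatio-temporal neighborhood are illustrative assumptions, as the paper defines a whole 3D-CDC family (including temporal-only variants), and the authoritative implementation is in the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CDC3D(nn.Module):
    # Minimal sketch of a 3D central difference convolution (3D-CDC).
    # theta = 0.0 recovers a plain 3D convolution; theta > 0 mixes in
    # the central-difference term. theta = 0.7 is an assumed default.
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)  # vanilla 3D convolution
        if self.theta == 0.0:
            return out
        # Sum w(p_n) over the (t, h, w) kernel extent into a 1x1x1
        # kernel, so x(p_0) * sum_n w(p_n) is itself a convolution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
        out_center = F.conv3d(x, kernel_sum, stride=self.conv.stride)
        return out - self.theta * out_center

# Usage on a dummy clip shaped (batch, channels, frames, height, width):
clip = torch.randn(1, 3, 8, 32, 32)
features = CDC3D(3, 16)(clip)  # -> torch.Size([1, 16, 8, 32, 32])

With stride 1 and "same" padding the two convolutions produce aligned outputs, so the per-location subtraction is well defined; the same holds for strided settings because the 1x1x1 convolution uses the same stride.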


Series: IEEE Transactions on Image Processing
ISSN: 1057-7149
ISSN-E: 1941-0042
ISSN-L: 1057-7149
Volume: 30
Pages: 5626-5640
DOI: 10.1109/TIP.2021.3087348
OADOI: https://oadoi.org/10.1109/TIP.2021.3087348
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Subjects:
NAS
Funding: The authors wish to acknowledge the CSC-IT Center for Science, Finland, for computational resources.
Copyright information: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.