University of Oulu

B. Zhao, X. Sun, X. Hong, Y. Yao and Y. Wang, "Zero-Shot Learning Via Recurrent Knowledge Transfer," 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 2019, pp. 1308-1317. doi: 10.1109/WACV.2019.00144

Zero-shot learning via recurrent knowledge transfer

Author: Zhao, Bo1,2; Sun, Xinwei3; Hong, Xiaopeng4; Yao, Yuan5; Wang, Yizhou1
Organizations: 1Nat’l Engineering Laboratory for Video Technology, Cooperative Medianet Innovation Center, Computer Science Dept., Peking University
2Deepwise AI Lab
3School of Mathematical Science, Peking University
4Center for Machine Vision and Signal Analysis, University of Oulu
5Department of Mathematics, Hong Kong University of Science and Technology
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.8 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2019080523437
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Publish Date: 2019-08-05
Abstract

Zero-shot learning (ZSL), which aims to learn new concepts without any labeled training data, is a promising solution to large-scale concept learning. Recently, many works have implemented zero-shot learning by transferring structural knowledge from the semantic embedding space to the image feature space. However, we observe that such direct knowledge transfer may suffer from the space shift problem, which manifests as an inconsistency between the geometric structures of the training and testing spaces. To alleviate this problem, we propose a novel method that performs recurrent knowledge transfer (RecKT) between the two spaces. Specifically, we unite the two spaces into a joint embedding space in which the unseen image data are missing. The proposed method provides a synthesis-refinement mechanism that learns the shared subspace structure (SSS) and synthesizes the missing data simultaneously in the joint embedding space. The synthesized unseen image data are then used to construct the classifier for unseen classes. Experimental results show that our method outperforms the state of the art on three popular datasets. An ablation experiment and a visualization of the learning process illustrate how our method alleviates the space shift problem. As a by-product, our method provides a perspective for interpreting ZSL performance by performing subspace clustering on the learned SSS.
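The abstract does not give the paper's exact optimization, but the synthesis-refinement idea it describes can be sketched on toy data: stack semantic embeddings and image features into one joint matrix per class, treat the unseen-class image block as missing, and alternate between estimating a shared subspace structure (here a ridge self-expression matrix, an assumed stand-in for the paper's SSS) and resynthesizing the missing block from it. All dimensions, the regularizer `lam`, and the `classify` helper below are hypothetical illustration choices, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, not the paper's datasets).
n_seen, n_unseen, d_sem, d_img = 6, 2, 4, 5
S = rng.normal(size=(d_sem, n_seen + n_unseen))   # semantic embeddings, all classes
X_seen = rng.normal(size=(d_img, n_seen))         # image features, seen classes only
X_unseen = np.zeros((d_img, n_unseen))            # missing block, to be synthesized
lam = 1.0                                         # ridge regularizer (assumed form)

for _ in range(15):
    # Joint embedding space: one column per class, semantic rows over image rows.
    D = np.vstack([S, np.hstack([X_seen, X_unseen])])
    G = D.T @ D
    # Refinement: shared structure C with D ≈ D @ C (ridge self-expression;
    # diagonal zeroed so a class cannot trivially represent itself).
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    np.fill_diagonal(C, 0.0)
    # Synthesis: rebuild the missing unseen image features from seen ones via C.
    X_all = np.hstack([X_seen, X_unseen]) @ C
    X_unseen = X_all[:, n_seen:]

def classify(x):
    # Nearest synthesized unseen-class prototype (hypothetical classifier).
    return int(np.argmin(np.linalg.norm(X_unseen - x[:, None], axis=0)))
```

After the loop, the synthesized columns of `X_unseen` serve as unseen-class prototypes, so a test image feature can be labeled by its nearest prototype; the paper's actual classifier construction and solver may differ.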


Series: IEEE Winter Conference on Applications of Computer Vision
ISSN: 1550-5790
ISSN-E: 2472-6737
ISSN-L: 1550-5790
ISBN: 978-1-7281-1975-5
ISBN Print: 978-1-7281-1976-2
Pages: 1308 - 1317
Article number: 8658396
DOI: 10.1109/WACV.2019.00144
OADOI: https://oadoi.org/10.1109/WACV.2019.00144
Host publication: 19th IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: This work was supported in part by NSFC-61527804, NSFC-61625201, NSFC-61650202 and NSFC-61572205. This work was also supported by the Academy of Finland and Infotech Oulu.
Copyright information: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.