University of Oulu

Y. Cui et al., "Uncertainty-Guided Semi-Supervised Few-Shot Class-Incremental Learning With Knowledge Distillation," in IEEE Transactions on Multimedia, vol. 25, pp. 6422-6435, 2023, doi: 10.1109/TMM.2022.3208743

Uncertainty-guided semi-supervised few-shot class-incremental learning with knowledge distillation

Author: Cui, Yawen1; Deng, Wanxia2; Xu, Xin3;
Organizations: 1CMVS, University of Oulu, Oulu, Finland
2School of Meteorology and Oceanography, NUDT, Changsha, Hunan, China
3College of Intelligent Science, NUDT, Changsha, Hunan, China
4College of Electronic Science, NUDT, Changsha, Hunan, China
5Laboratory for Big Data and Decision, College of System Engineering, National University of Defense Technology (NUDT), Changsha, Hunan, China
6Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2023040334591
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2023-04-03
Description:

Abstract

Class-Incremental Learning (CIL) aims at incrementally learning novel classes without forgetting old ones. This capability becomes more challenging when novel tasks contain only one or a few labeled training samples, which leads to a more practical learning scenario, i.e., Few-Shot Class-Incremental Learning (FSCIL). The dilemma of FSCIL lies in serious overfitting and exacerbated catastrophic forgetting caused by the limited training data from novel classes. In this paper, motivated by the easy accessibility of unlabeled data, we conduct pioneering work and focus on the Semi-Supervised Few-Shot Class-Incremental Learning (Semi-FSCIL) problem, which requires the model to incrementally learn new classes from extremely limited labeled samples and a large number of unlabeled samples. To address this problem, a simple but efficient framework is first constructed based on the knowledge distillation technique to alleviate catastrophic forgetting. To efficiently mitigate the overfitting problem on novel categories with unlabeled data, uncertainty-guided semi-supervised learning is incorporated into this framework to select unlabeled samples for incremental learning sessions according to the model's uncertainty. This process provides extra reliable supervision for the distillation process and contributes to better-formed class means. Our extensive experiments on the CIFAR100, miniImageNet, and CUB200 datasets demonstrate the promising performance of the proposed method and establish baselines for this new research direction.
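The selection-and-distillation idea described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the entropy threshold, the distillation temperature, and the use of predictive entropy as the uncertainty measure are illustrative assumptions.

```python
import numpy as np

def softmax(logits, t=1.0):
    # Temperature-scaled softmax over the class dimension.
    z = logits / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predictive_entropy(probs):
    # Shannon entropy of the predictive distribution; higher = more uncertain.
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_confident(unlabeled_logits, threshold):
    # Keep only unlabeled samples whose predictive entropy falls below the
    # (hypothetical) threshold, and pseudo-label them with the argmax class.
    probs = softmax(unlabeled_logits)
    mask = predictive_entropy(probs) < threshold
    pseudo_labels = probs.argmax(axis=1)
    return mask, pseudo_labels

def distillation_loss(student_logits, teacher_logits, t=2.0):
    # Standard KL-divergence distillation between the old (teacher) and new
    # (student) models' softened predictions, scaled by t^2.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1)
    return float(kl.mean() * t * t)
```

In an incremental session, the selected confident samples and their pseudo-labels would augment the few labeled novel-class samples, while `distillation_loss` keeps the new model close to the previous session's model on old classes.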


Series: IEEE transactions on multimedia
ISSN: 1520-9210
ISSN-E: 1941-0077
ISSN-L: 1520-9210
Volume: 25
Pages: 6422 - 6435
DOI: 10.1109/TMM.2022.3208743
OADOI: https://oadoi.org/10.1109/TMM.2022.3208743
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Subjects:
Funding: This work was partially supported by the National Key Research and Development Program of China under Grant 2021YFB3100800, the Academy of Finland under Grant 331883, the National Natural Science Foundation of China under Grants 61872379 and 62022091, and the China Scholarship Council (CSC) under Grant 201903170129.
Academy of Finland Grant Number: 331883
Detailed Information: 331883 (Academy of Finland Funding decision)
Copyright information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.