University of Oulu

W. Chen, Y. Liu, N. Pu, W. Wang, L. Liu and M. S. Lew, "Feature Estimations Based Correlation Distillation for Incremental Image Retrieval," in IEEE Transactions on Multimedia, vol. 24, pp. 1844-1856, 2022, doi: 10.1109/TMM.2021.3073279.

Feature estimations based correlation distillation for incremental image retrieval

Author: Chen, Wei1; Liu, Yu2; Pu, Nan1; Wang, W.3; Liu, L.3,4; Lew, M. S.1
Organizations: 1Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
2DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, China
3College of Systems Engineering, NUDT, Changsha, China
4Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 5.3 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2023041135888
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2023-04-11

Abstract

Deep learning for fine-grained image retrieval in an incremental context remains under-investigated. In this paper, we explore this task in order to give the model a continuous retrieval ability: it should perform well on new incoming data while reducing forgetting of the knowledge learned on preceding old tasks. For this purpose, we distill semantic-correlation knowledge among the representations extracted from the new data only, so as to regularize the parameter updates under a teacher-student framework. In particular, when multiple tasks are learned sequentially, aside from the correlations distilled from the penultimate model, we estimate the representations of all prior models, and further their semantic correlations, using the representations extracted from the new data. The estimated correlations then serve as an additional regularization that prevents catastrophic forgetting over all previous tasks, without the need to store the stream of models trained on those tasks. Extensive experiments demonstrate that the proposed method performs favorably in retaining performance on already-trained old tasks while achieving good accuracy on the current task, whether new data are added at once or sequentially.
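The correlation-distillation idea described in the abstract can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration only: the function names and the choice of cosine-similarity matrices as the "semantic correlations" are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def correlation_matrix(feats):
    """Pairwise cosine-similarity matrix of a batch of embeddings."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def correlation_distillation_loss(student_feats, teacher_feats):
    """Mean squared difference between the two correlation matrices.

    Penalizes the student (new model) for changing the semantic
    correlations that the teacher (old model) assigns to the SAME
    batch of new data -- a simplified, assumed form of the
    correlation-distillation regularizer.
    """
    cs = correlation_matrix(student_feats)
    ct = correlation_matrix(teacher_feats)
    return float(np.mean((cs - ct) ** 2))

# Toy batch of 4 embeddings of dimension 8.
rng = np.random.default_rng(0)
batch = rng.standard_normal((4, 8))

# Identical student and teacher features incur zero regularization.
assert correlation_distillation_loss(batch, batch) == 0.0
```

In training, this loss would be added to the retrieval loss on the new task, so that parameter updates are pulled toward preserving the old model's pairwise similarity structure rather than its raw features.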


Series: IEEE transactions on multimedia
ISSN: 1520-9210
ISSN-E: 1941-0077
ISSN-L: 1520-9210
Volume: 24
Pages: 1844 - 1856
DOI: 10.1109/TMM.2021.3073279
OADOI: https://oadoi.org/10.1109/tmm.2021.3073279
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Copyright information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.