W. Deng, L. Zhao, G. Kuang, D. Hu, M. Pietikäinen and L. Liu, "Deep Ladder-Suppression Network for Unsupervised Domain Adaptation," in IEEE Transactions on Cybernetics, vol. 52, no. 10, pp. 10735-10749, Oct. 2022, doi: 10.1109/TCYB.2021.3065247.
Deep ladder-suppression network for unsupervised domain adaptation
|Author:||Deng, Wanxia1; Zhao, Lingjun1; Kuang, Gangyao1; Hu, Dewen2; Pietikäinen, Matti3; Liu, Li4|
1State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, College of Electronic Science, National University of Defense Technology, Changsha 410073, China
2College of Intelligent Science, National University of Defense Technology, Changsha, China
3Center for Machine Vision and Signal analysis, University of Oulu, 90570 Oulu, Finland
4College of System Engineering, National University of Defense Technology, Changsha 410073, China
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022100561187|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2022-10-05|
Unsupervised domain adaptation (UDA) aims at learning a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting the entire information of the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, called the deep ladder-suppression network (DLSN), which is designed to better learn the cross-domain shared content by suppressing domain-specific variations. Our proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. By this design, the domain-specific details, which are only necessary for reconstructing the unlabeled target data, are directly fed to the decoder to complete the reconstruction task, relieving the pressure of learning domain-specific variations at the later layers of the shared encoder. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content and to ignore the domain-specific variations. Notably, the proposed DLSN can be used as a standard module and integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four gold-standard domain adaptation datasets, namely: 1) Digits; 2) Office-31; 3) Office-Home; and 4) VisDA-C, demonstrate that the proposed DLSN can consistently and significantly improve the performance of various popular UDA frameworks.
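The abstract's core mechanism — an autoencoder whose lateral connections route early-layer (domain-specific) detail straight to the decoder, so the later encoder layers need only retain cross-domain shared content — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the layer sizes, two-layer depth, and additive lateral connections are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes for a toy 2-layer encoder/decoder.
D_IN, D_H, D_Z = 8, 6, 4

# Randomly initialized weights stand in for trained parameters.
W_e1 = rng.normal(scale=0.1, size=(D_IN, D_H))  # encoder layer 1
W_e2 = rng.normal(scale=0.1, size=(D_H, D_Z))   # encoder layer 2 (shared features)
W_d2 = rng.normal(scale=0.1, size=(D_Z, D_H))   # decoder layer 2
W_d1 = rng.normal(scale=0.1, size=(D_H, D_IN))  # decoder layer 1

def ladder_forward(x):
    # Encoder: the final representation z is what a classifier would consume,
    # so ideally it keeps only cross-domain shared content.
    h1 = relu(x @ W_e1)
    z = relu(h1 @ W_e2)
    # Decoder with lateral connections: intermediate activations (h1) and the
    # input itself are added back in, carrying domain-specific detail to the
    # reconstruction directly, so z is not forced to encode it.
    d2 = relu(z @ W_d2) + h1    # lateral connection from encoder layer 1
    x_hat = d2 @ W_d1 + x       # lateral connection from the input
    return z, x_hat

x = rng.normal(size=(1, D_IN))
z, x_hat = ladder_forward(x)
recon_loss = float(np.mean((x_hat - x) ** 2))
print(z.shape, x_hat.shape)
```

In training, the reconstruction loss on unlabeled target data would be combined with a source-domain classification loss on `z`; the lateral paths make reconstruction easy without polluting `z` with domain-specific variation.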
|Journal:||IEEE Transactions on Cybernetics|
|Pages:||10735 - 10749|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
This work was supported in part by the National Natural Science Foundation of China under Grant 9186301 and Grant 61701508; in part by the Hunan Science and Technology Plan Project under Grant 2019GK2131; in part by the Hunan Provincial Natural Science Foundation of China under Grant 2018JJ3613; in part by the Academy of Finland under Grant 331883; and in part by the National Natural Science Foundation of China under Grant 61872379.
|Academy of Finland Grant Number:||331883 (Academy of Finland Funding decision)|
© The Author(s) 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.