Joint clustering and discriminative feature alignment for unsupervised domain adaptation |
|
Author: | Deng, Wanxia1; Liao, Qing2; Zhao, Lingjun1; |
Organizations: |
1College of Electronic Science, National University of Defense Technology, Changsha, Hunan, China
2Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China
3College of System Engineering, National University of Defense Technology, Changsha, Hunan, China
4College of Intelligent Science, National University of Defense Technology, Changsha, Hunan, China
5Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 1.9 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2021101150549 |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2021 |
Publish Date: | 2021-10-11 |
Description: |
Abstract: Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by leveraging knowledge from a labeled source domain with a different but related distribution. Many existing approaches learn a domain-invariant representation space by directly matching the marginal distributions of the two domains. However, they neglect both mining the underlying discriminative features of the target data and aligning the cross-domain discriminative features, which may lead to suboptimal performance. To tackle these two issues simultaneously, this paper presents a Joint Clustering and Discriminative Feature Alignment (JCDFA) approach for UDA, which naturally unifies the mining of discriminative features and the alignment of class-discriminative features in a single framework. Specifically, to mine the intrinsic discriminative information of the unlabeled target data, JCDFA jointly learns a shared encoding representation for two tasks: supervised classification of the labeled source data, and discriminative clustering of the unlabeled target data, where classification on the source domain guides the clustering of the target domain toward the object categories. We then conduct cross-domain discriminative feature alignment by separately optimizing two new metrics: 1) an extended supervised contrastive learning, i.e., semi-supervised contrastive learning; and 2) an extended Maximum Mean Discrepancy (MMD), i.e., conditional MMD, which explicitly minimizes the intra-class dispersion and maximizes the inter-class separation. When these two procedures, i.e., discriminative feature mining and alignment, are integrated into one framework, they benefit from each other from a cooperative-learning perspective and enhance the final performance. Experiments are conducted on four real-world benchmarks (i.e., Office-31, ImageCLEF-DA, Office-Home, and VisDA-C).
All the results demonstrate that our JCDFA obtains remarkable margins over state-of-the-art domain adaptation methods. Comprehensive ablation studies also verify the importance of each key component of the proposed algorithm and the effectiveness of combining the two learning strategies into one framework.
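The class-conditional MMD mentioned in the abstract can be illustrated with a minimal NumPy sketch: instead of matching the two marginal feature distributions with a single MMD term, the discrepancy is computed per class, conditioning the target side on cluster-derived pseudo-labels. The function names, the RBF kernel choice, and the uniform class averaging below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of squared MMD between two samples.
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

def conditional_mmd(Xs, ys, Xt, yt_pseudo, num_classes, gamma=1.0):
    # Class-conditional MMD: align source and target features class by class,
    # using target pseudo-labels (e.g., cluster assignments) for conditioning.
    # Classes absent from either domain in the batch are skipped.
    per_class = []
    for c in range(num_classes):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            per_class.append(mmd2(Xs_c, Xt_c, gamma))
    return float(np.mean(per_class)) if per_class else 0.0
```

Conditioning on (pseudo-)labels in this way is what lets the alignment shrink intra-class dispersion across domains rather than merely matching the overall feature distributions; in the paper's setting, the pseudo-labels would come from the discriminative clustering branch.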
|
Series: |
IEEE Transactions on Image Processing |
ISSN: | 1057-7149 |
ISSN-E: | 1941-0042 |
ISSN-L: | 1057-7149 |
Volume: | 30 |
Pages: | 7842 - 7855 |
DOI: | 10.1109/TIP.2021.3109530 |
OADOI: | https://oadoi.org/10.1109/TIP.2021.3109530 |
Type of Publication: |
A1 Journal article – refereed |
Field of Science: |
113 Computer and information sciences |
Funding: |
This work was partially supported by the Academy of Finland under Grant 331883, and by the National Natural Science Foundation of China under Grants 61872379, 62022091, and 71701205.
Academy of Finland Grant Number: |
331883 |
Detailed Information: |
331883 (Academy of Finland Funding decision) |
Copyright information: |
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |