L. Zhang et al., "Domain Knowledge Powered Two-Stream Deep Network for Few-Shot SAR Vehicle Recognition," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-15, 2022, Art. no. 5215315, doi: 10.1109/TGRS.2021.3116349.
Domain knowledge powered two-stream deep network for few-shot SAR vehicle recognition
|Author:||Zhang, Linbin1; Leng, Xiangguang1; Feng, Sijia1;|
1State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China
2College of System Engineering, National University of Defense Technology, Changsha 410073, China
3Center for Machine Vision and Signal Analysis, University of Oulu, 90570 Oulu, Finland
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022030822383|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2022-03-08|
Synthetic aperture radar (SAR) target recognition faces the challenge that very little labeled data are available. Although few-shot learning methods have been developed to extract more information from a small amount of labeled data and avoid overfitting, recent few-shot or limited-data SAR target recognition algorithms overlook the unique SAR imaging mechanism. This study proposes a domain-knowledge-powered two-stream deep network (DKTS-N), which incorporates SAR domain knowledge related to the azimuth angle, amplitude, and phase data of vehicles, making it a pioneering work in few-shot SAR vehicle recognition. The two-stream deep network, which extracts features from both the entire image and image patches, is designed to exploit this SAR domain knowledge more effectively. To measure the structural distance between the global and local features of vehicles, the deep Earth mover's distance is improved to cope with the features from the two-stream network. Given the sensitivity of SAR vehicle recognition to azimuth angle, a nearest neighbor classifier replaces the structured fully connected layer for K-shot classification. All experiments are conducted with the SARSIM dataset as the source task and the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset as the target task. The proposed DKTS-N achieved accuracies of 49.26% and 96.15% under ten-way one-shot and ten-way 25-shot settings, respectively, with labeled samples randomly selected from the training set. Under the standard operating condition (SOC) and three extended operating conditions (EOCs), DKTS-N showed clear advantages in accuracy and time consumption over other few-shot learning methods in K-shot recognition tasks.
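The abstract mentions replacing a structured fully connected layer with a nearest neighbor classifier for K-shot classification. As a rough illustration only (not the paper's actual implementation, and with hypothetical function and variable names), such a classifier assigns each query embedding the label of its closest support-set embedding in feature space:

```python
import numpy as np

def nearest_neighbor_classify(query_feats, support_feats, support_labels):
    """Label each query with the class of its nearest support embedding.

    query_feats:    (n_query, d) array of query feature vectors
    support_feats:  (n_support, d) array of labeled support feature vectors
    support_labels: (n_support,) array of class labels
    """
    # Pairwise Euclidean distances via broadcasting: shape (n_query, n_support)
    dists = np.linalg.norm(
        query_feats[:, None, :] - support_feats[None, :, :], axis=-1
    )
    # Pick the label of the closest support sample for each query
    return support_labels[np.argmin(dists, axis=1)]

# Toy usage: two support classes, two queries near each cluster
support = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
queries = np.array([[1.0, 1.0], [9.0, 9.0]])
print(nearest_neighbor_classify(queries, support, labels))  # → [0 1]
```

In a few-shot setting like the one described, the feature vectors would come from the trained two-stream network rather than raw pixels; a distance-based classifier like this needs no additional trainable parameters, which is one reason such classifiers are common when labeled samples are scarce.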
IEEE transactions on geoscience and remote sensing
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
This work was supported in part by the National Natural Science Foundation of China under Grants 61872379 and 62001480, and in part by the Hunan Provincial Natural Science Foundation of China under Grants 2018JJ3613 and 2021JJ40684.
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.