A Part Power Set Model for Scale-Free Person Retrieval |
|
Author: | Shen, Yunhang (1); Ji, Rongrong (1,2); Hong, Xiaopeng (3,4); |
Organizations: |
1 Fujian Key Laboratory of Sensing and Computing for Smart City, School of Information Science and Engineering, Xiamen University, 361005, China
2 Peng Cheng Laboratory, China
3 Xi’an Jiaotong University, China
4 University of Oulu, Finland
5 Southern University of Science and Technology
6 Tencent Youtu Lab, Tencent Technology (Shanghai) Co., Ltd. |
Format: | article |
Version: | published version |
Access: | open |
Online Access: | PDF Full Text (PDF, 1.6 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2020062645825 |
Language: | English |
Published: | International Joint Conferences on Artificial Intelligence Organization, 2019 |
Publish Date: | 2020-06-26 |
Description: |
Abstract: Recently, person re-identification (re-ID) has attracted increasing research attention and has broad application prospects in video surveillance and beyond. Most existing methods rely heavily on well-aligned pedestrian images and hand-engineered part-based models built on the coarsest feature map. In this paper, to relax such fixed and coarse input alignment, an end-to-end part power set model with multi-scale features is proposed, which captures the discriminative parts of pedestrians from global to local and from coarse to fine, enabling part-based scale-free person re-ID. In particular, we first factorize the visual appearance by enumerating $k$-combinations for all $k$ of $n$ body parts to exploit rich global and partial information and learn discriminative feature maps. Then, a combination ranking module is introduced to guide model training with all combinations of body parts, alternating between ranking combinations and estimating an appearance model. To enable scale-free input, we further exploit the pyramid architecture of deep networks to construct multi-scale feature maps at a feasible extra cost in terms of memory and time. Extensive experiments on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reID and CUHK03, validate that our method achieves state-of-the-art performance.
|
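An illustrative sketch (editorial, not from the paper): the abstract's core construction enumerates every k-combination of n body parts for all k, i.e. the non-empty power set, and derives one descriptor per combination. The Python below assumes horizontal-stripe parts and average pooling, which are common re-ID conventions rather than details confirmed by the paper; all names and shapes are hypothetical.

    # Minimal sketch of the part power set idea (assumptions: horizontal
    # stripes as "parts", average pooling; not the authors' implementation).
    from itertools import combinations
    import numpy as np

    def part_power_set_descriptors(feature_map: np.ndarray, n_parts: int = 3):
        """feature_map: (C, H, W) conv features -> {part subset: (C,) descriptor}."""
        # Split the height axis into n horizontal stripes, one per body part.
        stripes = np.array_split(feature_map, n_parts, axis=1)
        # Global average pooling of each stripe gives one (C,) vector per part.
        part_vecs = [s.mean(axis=(1, 2)) for s in stripes]
        descriptors = {}
        # Enumerate all k-combinations for every k = 1..n (2^n - 1 subsets).
        for k in range(1, n_parts + 1):
            for subset in combinations(range(n_parts), k):
                # Pool the member parts into the combination's descriptor.
                descriptors[subset] = np.mean([part_vecs[i] for i in subset], axis=0)
        return descriptors

    descs = part_power_set_descriptors(np.random.rand(256, 24, 8), n_parts=3)
    print(sorted(descs))  # 7 subsets: (0,), (1,), (2,), (0,1), (0,2), (1,2), (0,1,2)

With n = 3 parts this yields 2^3 - 1 = 7 combinations, matching the abstract's "from global to local" description: singleton subsets are the most local cues, while the full set recovers the global descriptor.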
ISBN Print: | 978-0-9992411-4-1 |
Pages: | 3397–3403 |
DOI: | 10.24963/ijcai.2019/471 |
OADOI: | https://oadoi.org/10.24963/ijcai.2019/471 |
Host publication: |
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, 10-16 August 2019, Macao, China |
Host publication editor: |
Kraus, Sarit
Conference: |
International Joint Conference on Artificial Intelligence
Type of Publication: |
A4 Article in conference proceedings |
Field of Science: |
113 Computer and information sciences |
Funding: |
This work is supported by the National Key R&D Program (No.2017YFC0113000 and No.2016YFB1001503), the Natural Science Foundation of China (No.U1705262, No.61772443, and No.61572410), the Postdoctoral Innovative Talent Support Program under Grant BX201600094, the China Postdoctoral Science Foundation under Grant 2017M612134, the Scientific Research Project of the National Language Committee of China (Grant No. YB135-49), and the Natural Science Foundation of Fujian Province, China (No. 2017J01125 and No. 2018J01106). |
Copyright information: |
© International Joint Conferences on Artificial Intelligence Organization 2019. The Definitive Version of Record can be found online at: https://doi.org/10.24963/ijcai.2019/471. |