Xin Shu, Guoying Zhao, Scalable multi-label canonical correlation analysis for cross-modal retrieval, Pattern Recognition, Volume 115, 2021, 107905, ISSN 0031-3203, https://doi.org/10.1016/j.patcog.2021.107905
Scalable multi-label canonical correlation analysis for cross-modal retrieval
Authors: Shu, Xin (1,2); Zhao, Guoying (2)
(1) College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
(2) Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
Persistent link: http://urn.fi/urn:nbn:fi-fe2022012811209
Publish Date: 2023-02-20
Abstract: Multi-label canonical correlation analysis (ml-CCA) has been developed for cross-modal retrieval. However, computing ml-CCA involves the eigendecomposition of dense matrices, which can be computationally expensive. In addition, ml-CCA takes only semantic correlation into account, ignoring cross-modal feature correlation. In this paper, we propose a novel framework that integrates semantic correlation and feature correlation simultaneously for cross-modal retrieval. By using a semantic transformation, we show that our model avoids computing the covariance matrices explicitly, which yields substantial computational savings. Further analysis shows that our proposed method can be solved via singular value decomposition, which has linear time complexity. Experimental results on three multi-label datasets demonstrate the accuracy and efficiency of our proposed method.
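To illustrate the general technique the abstract alludes to, the sketch below solves classical two-view CCA through thin QR factorizations followed by an SVD, so the canonical directions are obtained without ever forming the covariance matrices explicitly. This is a minimal, generic example (the function name `cca_via_svd` and the synthetic data are assumptions for illustration), not the paper's ml-CCA formulation, which additionally incorporates multi-label semantic correlation.

```python
import numpy as np

def cca_via_svd(X, Y, k):
    """Two-view CCA without forming covariance matrices explicitly.

    X: (n, dx) and Y: (n, dy) hold paired observations row-wise.
    Returns the top-k canonical correlations and projections (Wx, Wy).
    """
    # Center each view.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Thin QR factorizations replace explicit dx*dx / dy*dy covariances.
    Qx, Rx = np.linalg.qr(Xc)
    Qy, Ry = np.linalg.qr(Yc)
    # SVD of the small cross-product of the orthonormal bases; the
    # singular values are the canonical correlations (cosines of the
    # principal angles between the two column spaces).
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    # Map the singular vectors back to the original coordinates.
    Wx = np.linalg.solve(Rx, U[:, :k])
    Wy = np.linalg.solve(Ry, Vt.T[:, :k])
    return s[:k], Wx, Wy

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 3))                      # shared latent signal
X = Z @ rng.standard_normal((3, 8)) + 0.1 * rng.standard_normal((500, 8))
Y = Z @ rng.standard_normal((3, 6)) + 0.1 * rng.standard_normal((500, 6))
corrs, Wx, Wy = cca_via_svd(X, Y, k=3)
print(corrs)  # strong shared signal -> correlations close to 1
```

Because the SVD operates on the small matrix `Qx.T @ Qy` rather than on dense covariance matrices, the cost scales linearly in the number of samples, mirroring the scalability argument in the abstract.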
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
This work was supported by the National Natural Science Foundation of China (Grants 61806097 and 61602248), the Academy of Finland project MiGA (Grant 316765), the ICT 2023 project (Grant 328115), Infotech Oulu, and the China Scholarship Council.
Academy of Finland Grant Numbers:
316765 (Academy of Finland funding decision)
328115 (Academy of Finland funding decision)
© 2021. This manuscript version is made available under the CC BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/.