Absent multiple kernel learning algorithms |
|
Author: | Liu, Xinwang1; Wang, Lei2; Zhu, Xinzhong3,4; |
Organizations: |
1College of Computer, National University of Defense Technology, Changsha 410073, China
2School of Computing and Information Technology, University of Wollongong, NSW 2522, Australia
3College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua 321004, China
4Research Institute of Ningbo Cixing Co. Ltd, Ningbo 315336, China
5UBTECH Sydney Artificial Intelligence Centre, The University of Sydney, Australia
6School of Information Technologies, Faculty of Engineering and Information Technologies, The University of Sydney, J12 Cleveland St, Darlington NSW 2008, Australia
7College of System Engineering, National University of Defense Technology, Changsha 410073, China
8University of Oulu, Finland
9Dongguan University of Technology, Guangdong 523808, China |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 5.4 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe201902256178 |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2019 |
Publish Date: | 2019-02-25 |
Description: |
Abstract: Multiple kernel learning (MKL) has been intensively studied during the past decade. It optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels of a sample are missing, which is not uncommon in practical applications. This paper proposes three absent MKL (AMKL) algorithms to address this issue. Unlike existing approaches, where missing channels are first imputed and a standard MKL algorithm is then deployed on the imputed data, our algorithms directly classify each sample based on its observed channels, without performing imputation. Specifically, we define a margin for each sample in its own relevant space, the space corresponding to that sample's observed channels. The proposed AMKL algorithms then maximize the minimum of all sample-based margins, which leads to a difficult optimization problem. We first provide two two-step iterative algorithms to approximately solve this problem. After that, we show that the problem can be reformulated as a convex one by applying the representer theorem, making it readily solvable via existing convex optimization packages. In addition, we provide a generalization error bound to justify the proposed AMKL algorithms from a theoretical perspective. Extensive experiments on nine UCI and six MKL benchmark datasets compare the proposed algorithms with existing imputation-based methods. As demonstrated, our algorithms achieve superior performance, and the improvement becomes more significant as the missing ratio increases.
|
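As a rough illustration of the sample-based margin idea described in the abstract: each sample's decision value is computed only from its observed channels, and the objective targets the minimum of these margins. This is a minimal sketch, not the paper's actual formulation; the per-channel score inputs, the per-sample weight renormalization, and all names here are assumptions of the sketch.

```python
import numpy as np

def sample_margins(scores, mask, y, mu):
    """Margin of each sample over its observed channels only (illustrative).

    scores: (n, m) per-channel decision values for each sample (assumed given)
    mask:   (n, m) 1 where channel j of sample i is observed, else 0
    y:      (n,)   labels in {-1, +1}
    mu:     (m,)   kernel combination weights
    """
    w = mask * mu                          # zero out weights of missing channels
    w = w / w.sum(axis=1, keepdims=True)   # renormalize within each sample's relevant space
    f = (w * scores).sum(axis=1)           # decision value using observed channels only
    return y * f                           # sample-based margins

# toy usage: sample 0 is missing channel 1, sample 1 observes both channels
scores = np.array([[1.0, -0.5],
                   [0.5,  2.0]])
mask = np.array([[1, 0],
                 [1, 1]])
y = np.array([1, -1])
mu = np.array([0.5, 0.5])
margins = sample_margins(scores, mask, y, mu)
# an AMKL-style objective would then maximize margins.min() over the classifier parameters
```

Note that no imputation occurs: the missing channel of sample 0 simply does not contribute to its margin, which is the key contrast with imputation-based pipelines.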
Series: |
IEEE transactions on pattern analysis and machine intelligence |
ISSN: | 0162-8828 |
ISSN-E: | 2160-9292 |
ISSN-L: | 0162-8828 |
Volume: | 42 |
Issue: | 6 |
DOI: | 10.1109/TPAMI.2019.2895608 |
OADOI: | https://oadoi.org/10.1109/TPAMI.2019.2895608 |
Type of Publication: |
A1 Journal article – refereed |
Field of Science: |
113 Computer and information sciences |
Funding: |
This work was supported by the National Key R&D Program of China (grant 2018YFB1003203) and the Natural Science Foundation of China (projects 61773392, 61672528 and 61701451).
Copyright information: |
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |