Hu, X., Zeng, Y., Xu, X., Zhou, S., & Liu, L. (2021). Robust semi-supervised classification based on data augmented online ELMs with deep features. Knowledge-Based Systems, 229, 107307. https://doi.org/10.1016/j.knosys.2021.107307
Robust semi-supervised classification based on data augmented online ELMs with deep features
|Author:||Hu, Xiaochang1; Zeng, Yujun1; Xu, Xin1,2; Zhou, S.; Liu, L.|
1College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
2Laboratory of Science and Technology on Integrated Logistics support, National University of Defense Technology, Changsha 410073, China
3College of System Engineering, National University of Defense Technology, Changsha 410073, China
4Center for Machine Vision and Signal Analysis, University of Oulu, Finland
|Online Access:||PDF Full Text (PDF, 6.2 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022022821164|
|Publish Date:||2022-02-28|
One important strategy in semi-supervised learning is to utilize the predicted pseudo labels of unlabeled data to relieve the overdependence of supervised learning algorithms on ground-truth labels. However, the performance of such semi-supervised methods relies heavily on the quality of the pseudo labels. To address this issue, a robust semi-supervised classification method, named data augmented online extreme learning machines (ELMs) with deep features (DF-DAELM), is proposed. This method first extracts features and infers labels for unlabeled data through self-training. Then, with the learned features and inferred labels, two noise-robust shallow classifiers based on data augmentation (i.e., SLI-OELM and CR-OELM) are proposed to eliminate the adverse effects of label noise on classifier training. Specifically, inspired by label smoothing, a data augmented method, SLI-OELM, is designed based on stochastic linear interpolation to improve the robustness of ELM-based classifiers. Furthermore, based on the smoothness assumption, the proposed CR-OELM utilizes an ℓ₂-norm consistency regularization term to implicitly weight noisy samples. Comprehensive experiments demonstrate that DF-DAELM achieves competitive or even better performance on CIFAR-10/100 and SVHN compared with related state-of-the-art methods. Meanwhile, for the proposed classifiers, experimental results on the MNIST dataset with different noise levels and sample scales demonstrate their superior performance, especially when the sample scale is small (≤ 20 K) and the noise is strong (40%–80%).
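The two augmentation ideas named in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes SLI-OELM's stochastic linear interpolation behaves like mixup-style blending of sample/label pairs with a Beta-distributed coefficient, and that CR-OELM's consistency term penalizes the squared ℓ₂ distance between predictions on two views of the same input. Function names and the `alpha` parameter are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_linear_interpolation(x1, y1, x2, y2, alpha=1.0, rng=rng):
    """Blend a pair of samples and their (possibly pseudo) labels with a
    Beta(alpha, alpha)-distributed coefficient, mixup-style.  This mirrors
    the idea behind SLI-OELM as described in the abstract; the exact
    formulation in the paper may differ."""
    lam = rng.beta(alpha, alpha)          # interpolation weight in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # interpolated feature vector
    y = lam * y1 + (1.0 - lam) * y2       # smoothed (soft) label
    return x, y

def l2_consistency(pred_a, pred_b):
    """Squared ℓ2-norm consistency term between predictions on two
    perturbed views of the same input (CR-OELM-style regularization,
    sketched).  Noisy samples whose predictions disagree across views
    incur a larger penalty, which implicitly down-weights them."""
    return float(np.sum((np.asarray(pred_a) - np.asarray(pred_b)) ** 2))
```

As a usage note, interpolating a one-hot label with another keeps the result a valid probability vector (entries sum to 1), which is what makes the interpolated targets act like smoothed labels during classifier training.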
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
This work was supported by the National Natural Science Foundation of China under Grants 61825305, 62006237, and 62022091.
© 2021 Published by Elsevier B.V. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/.