SAM: Self-augmentation mechanism for COVID-19 detection using chest X-ray images |
|
Author: | Muhammad, Usman (1); Hoque, Md. Ziaul (1); Oussalah, Mourad (1,2); |
Organizations: |
1 Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
2 Medical Imaging, Physics, and Technology (MIPT), Faculty of Medicine, University of Oulu, Finland
3 Department of Pathology and Anatomical Sciences, University at Buffalo, USA |
Format: | article |
Version: | published version |
Access: | open |
Online Access: | PDF Full Text (PDF, 1.7 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2022041228462 |
Language: | English |
Published: | Elsevier, 2022 |
Publish Date: | 2022-04-12 |
Description: |
Abstract: COVID-19 is a rapidly spreading viral disease that has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated particularly in countries with weakened healthcare systems. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19 infected patients mostly develop a lung infection after coming in contact with the virus. Therefore, chest X-ray (i.e., radiography) and chest CT can serve as surrogates in countries where PCR is not readily available. This has prompted the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging to improve the accuracy of diagnosis. However, the performance remains limited due to the lack of representative X-ray images in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism for data augmentation in the feature space rather than in the data space using reconstruction independent component analysis (RICA). Specifically, we propose a unified architecture that contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, from which the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM is used to classify the processed sequential information. Experiments on three publicly available databases show that the proposed approach achieves state-of-the-art results with accuracies of 97%, 84%, and 98%. Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
|
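To make the pipeline summarized in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: pooling-layer CNN features are augmented in feature space with an independent component projection and then classified with a BiLSTM. It substitutes scikit-learn's FastICA for RICA, uses random arrays as placeholders for real pooling-layer activations of a pretrained CNN, and assumes PyTorch for the BiLSTM, a three-class setup, and illustrative dimensions and hyperparameters.

# Minimal sketch (not the authors' code): pooled CNN features -> ICA-based
# feature augmentation -> BiLSTM classifier. FastICA stands in for RICA, and
# the feature matrix is a random placeholder for real pooling-layer activations.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Placeholder for pooling-layer features of N chest X-ray images (N x D).
N, D = 200, 512
cnn_features = rng.normal(size=(N, D)).astype(np.float32)
labels = torch.tensor(rng.integers(0, 3, size=N))            # e.g. COVID / pneumonia / normal

# Feature-space augmentation: project onto K independent components and
# concatenate them with the originals (low-dimensional augmented features).
K = 64
ica = FastICA(n_components=K, random_state=0)
augmented = ica.fit_transform(cnn_features)                   # N x K
features = np.concatenate([cnn_features, augmented], axis=1)  # N x (D + K)

# Treat each augmented vector as a short sequence so a BiLSTM can process it.
seq_len = 8
step = (D + K) // seq_len
x = torch.from_numpy(features[:, : seq_len * step]).float().reshape(N, seq_len, step)

class BiLSTMClassifier(nn.Module):
    def __init__(self, input_size, hidden_size=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)           # out: (N, seq_len, 2 * hidden_size)
        return self.fc(out[:, -1, :])   # classify from the last time step

model = BiLSTMClassifier(input_size=step)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                  # a few epochs just to show the training loop
    optimizer.zero_grad()
    loss = criterion(model(x), labels)
    loss.backward()
    optimizer.step()

Reshaping the augmented vector into a short sequence is one simple way to feed fixed-length features to a recurrent classifier; the paper's actual feature selection and sequence construction may differ.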
Series: |
Knowledge-based systems |
ISSN: | 0950-7051 |
ISSN-E: | 1872-7409 |
ISSN-L: | 0950-7051 |
Volume: | 241 |
Article number: | 108207 |
DOI: | 10.1016/j.knosys.2022.108207 |
OADOI: | https://oadoi.org/10.1016/j.knosys.2022.108207 |
Type of Publication: |
A1 Journal article – refereed |
Field of Science: |
113 Computer and information sciences; 217 Medical engineering |
Funding: |
The research work of this paper was supported by the Center for Machine Vision and Signal Analysis (CMVS) in the Faculty of Information Technology and Electrical Engineering (ITEE) at the University of Oulu, Finland. The authors are grateful to the Academy of Finland Profi5 DigiHealth project (#326291). |
Copyright information: |
© 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). |