University of Oulu

Bahador, N., Jokelainen, J., Mustola, S., & Kortelainen, J. (2021). Multimodal spatio-temporal-spectral fusion for deep learning applications in physiological time series processing: A case study in monitoring the depth of anesthesia. Information Fusion, 73, 125–143. https://doi.org/10.1016/j.inffus.2021.03.001

Multimodal spatio-temporal-spectral fusion for deep learning applications in physiological time series processing: a case study in monitoring the depth of anesthesia

Author: Bahador, Nooshin1; Jokelainen, Jarno2; Mustola, Seppo2; Kortelainen, J.1
Organizations: 1Physiological Signal Analysis Team, Center for Machine Vision and Signal Analysis, MRC Oulu, University of Oulu, Oulu, Finland
2Department of Anesthesia, Intensive Care and Pain Medicine at South Carelia Central Hospital, Lappeenranta, Finland
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 22.8 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2022031022836
Language: English
Published: Elsevier, 2021
Publish Date: 2022-03-10

Abstract

Physiological signal processing brings challenges including dimensionality (due to the number of channels), heterogeneity (due to the different ranges of values) and multimodality (due to the different sources). In this regard, the current study intended, first, to use time-frequency ridge mapping to explore the use of fused information from joint EEG–ECG recordings in tracking the transition between different states of anesthesia. Second, it investigated the effectiveness of pre-trained state-of-the-art deep learning architectures for learning discriminative features in the fused data in order to classify the states during anesthesia. Experimental data from healthy-brain patients undergoing operation (N = 20) were used for this study. Data were recorded from the BrainStatus device with a single ECG channel and 10 EEG channels. The obtained results support the hypothesis that not only can ridge fusion capture temporal-spectral progression patterns across all modalities and channels, but this simplified interpretation of the time-frequency representation also accelerates the training process while significantly improving the efficiency of deep models. Classification outcomes demonstrate that this fusion yields better performance, with 94.14% precision and a 0.28 s prediction time, compared to commonly used data-level fusion methods. To conclude, the proposed fusion technique makes it possible to embed time-frequency information as well as spatial dependencies over modalities and channels in just a 2D array. This integration technique offers a more unified and global view of the different aspects of the physiological data at hand while maintaining the desired performance level in decision making.
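The ridge-fusion idea described in the abstract can be sketched roughly as follows: for each channel (EEG or ECG), compute a time-frequency representation, extract its ridge (the frequency carrying maximal spectral energy at each time step), and stack the per-channel ridge traces into a single 2D array (channels × time). The sketch below, using NumPy/SciPy on synthetic signals, illustrates only this general principle; the channel count, sampling rate, spectrogram parameters, and the argmax-based ridge extraction are illustrative assumptions, not the paper's exact method or settings.

```python
import numpy as np
from scipy.signal import spectrogram

def ridge_trace(x, fs, nperseg=256, noverlap=192):
    """Return the 'ridge': the frequency of maximal spectral energy
    at each time step of the spectrogram of signal x."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return t, f[np.argmax(Sxx, axis=0)]

def fuse_ridges(channels, fs):
    """Stack per-channel ridge traces into one 2D array (channels x time),
    embedding the dominant temporal-spectral progression of every channel."""
    ridges = [ridge_trace(ch, fs)[1] for ch in channels]
    return np.vstack(ridges)

# Synthetic stand-ins (illustrative): 10 "EEG" channels as noisy sinusoids
# between 10 and 19 Hz, plus 1 "ECG" channel as a 1.2 Hz sinusoid.
fs = 200  # Hz, illustrative sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
eeg = [np.sin(2 * np.pi * (10 + k) * t) + 0.1 * rng.standard_normal(t.size)
       for k in range(10)]
ecg = [np.sin(2 * np.pi * 1.2 * t)]

fused = fuse_ridges(eeg + ecg, fs)
print(fused.shape)  # (11, n_time_bins): one ridge trace per channel
```

The resulting 2D array is what a pre-trained 2D deep architecture could then consume directly, which is the practical appeal of collapsing multichannel, multimodal time-frequency content into a single image-like representation.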


Series: Information fusion
ISSN: 1566-2535
ISSN-E: 1872-6305
ISSN-L: 1566-2535
Volume: 73
Pages: 125–143
DOI: 10.1016/j.inffus.2021.03.001
OADOI: https://oadoi.org/10.1016/j.inffus.2021.03.001
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
3126 Surgery, anesthesiology, intensive care, radiology
Subjects:
Funding: This work was supported by a grant (No. 308935) from the Academy of Finland and Infotech, by the Orion Research Foundation sr, and by the Walter Ahlström Foundation.
Academy of Finland Grant Number: 308935
Detailed Information: 308935 (Academy of Finland Funding decision)
Copyright information: © 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).