University of Oulu

D. Beddiar, M. Oussalah and T. Seppänen, "Explainability for Medical Image Captioning," 2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Salzburg, Austria, 2022, pp. 1-6, doi: 10.1109/IPTA54936.2022.9784146.

Explainability for medical image captioning

Author: Beddiar, Djamila1; Oussalah, Mourad1,2; Seppänen, Tapio1
Organizations: 1University of Oulu, CMVS Oulu, Finland
2MIPT, Faculty of Medicine Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 3.5 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202301162915
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2023-01-16
Description:

Abstract

Medical image captioning is the process of generating clinically meaningful descriptions for medical images, with medical report generation being its most frequent application. Automatic captioning of medical images is of great interest to medical experts, since it assists in diagnosis and disease treatment and helps automate the workflow of health practitioners. Although many recent efforts have aimed at producing accurate descriptions, medical image captioning still yields weak or incorrect descriptions. To alleviate this issue, it is important to explain why the model produced a particular caption based on specific features. This is the goal of explainable artificial intelligence (XAI), which aims to unfold the ‘black-box’ character of deep-learning-based models. In this paper, we present an explainable module for medical image captioning that provides a sound interpretation of our attention-based encoder-decoder model by explaining the correspondence between visual and semantic features. To that end, we exploit self-attention to compute the word importance of semantic features, and visual attention to identify the image regions that correspond to each generated word of the caption, in addition to visualizing the visual features extracted at each layer of the Convolutional Neural Network (CNN) encoder. Finally, we evaluate our model on the ImageCLEF medical captioning dataset.
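The visual-attention explanation described in the abstract can be sketched in a few lines: for each generated word, spatial positions of the CNN feature map are scored against a word query vector, normalized with a softmax, and upsampled into a heatmap over the input image. This is an illustrative sketch only; the dot-product scoring, function names, and feature-map size are assumptions, not the authors' exact model.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def visual_attention_map(features, query, image_size=(224, 224)):
    """Score each spatial location of a CNN feature map against a word
    query vector, then upsample the attention weights to image size.
    (Illustrative: a simple dot-product attention, not the paper's model.)"""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)          # one feature vector per region
    scores = flat @ query                      # relevance of each region to the word
    alpha = softmax(scores).reshape(h, w)      # attention weights sum to 1
    # nearest-neighbour upsampling so the map can overlay the input image
    ry, rx = image_size[0] // h, image_size[1] // w
    return np.kron(alpha, np.ones((ry, rx)))

# toy example: a 7x7x512 feature map (e.g. a ResNet-like final stage)
# and a random 512-d query standing in for a generated word's embedding
rng = np.random.default_rng(0)
feats = rng.standard_normal((7, 7, 512))
word_q = rng.standard_normal(512)
heatmap = visual_attention_map(feats, word_q)
print(heatmap.shape)  # (224, 224)
```

A heatmap like this, computed per generated word, is what lets a reader check whether the caption word is grounded in a clinically relevant image region.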


Series: Proceedings. International Workshops on Image Processing Theory, Tools, and Applications
ISSN: 2154-5111
ISSN-E: 2154-512X
ISSN-L: 2154-5111
ISBN: 978-1-6654-6964-7
ISBN Print: 978-1-6654-6965-4
Pages: 1 - 6
Article number: 9784146
DOI: 10.1109/ipta54936.2022.9784146
OADOI: https://oadoi.org/10.1109/ipta54936.2022.9784146
Host publication: 2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Salzburg, Austria 19-22 April 2022
Conference: International Conference on Image Processing Theory, Tools and Applications
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This work is supported by the Academy of Finland Profi5 DigiHealth project (#326291) and the European Youngsters Resilience through Serious Games project, under the Internal Security Fund-Police action grant 823701-ISFP-2017-AG-RAD, which are gratefully acknowledged.
Copyright information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.