Antti Isosalo, Henrik Mustonen, Topi Turunen, Pieta S. Ipatti, Jarmo Reponen, Miika T. Nieminen, and Satu I. Inkinen "Evaluation of different convolutional neural network encoder-decoder architectures for breast mass segmentation", Proc. SPIE 12037, Medical Imaging 2022: Imaging Informatics for Healthcare, Research, and Applications, 120370W (4 April 2022); https://doi.org/10.1117/12.2628190
Evaluation of different convolutional neural network encoder-decoder architectures for breast mass segmentation
|Author:||Isosalo, Antti1; Mustonen, Henrik1; Turunen, Topi1; Ipatti, Pieta S.; Reponen, Jarmo; Nieminen, Miika T.; Inkinen, Satu I.|
1Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
2Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
3Medical Research Center Oulu, University of Oulu and Oulu University Hospital, Oulu, Finland
4Department of Radiology, HUS Diagnostic Center, Helsinki University and Helsinki University Hospital, Helsinki, Finland
|Online Access:||PDF Full Text (PDF, 1.2 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022061747588|
|Publish Date:||2022-06-17|
In this work, we study convolutional neural network encoder-decoder architectures with pre-trained encoder weights for breast mass segmentation from digital screening mammograms. A fundamental task in automatic breast cancer detection is the segmentation of potentially abnormal regions. Our objective was to determine whether encoder weights pre-trained for breast cancer evaluation, compared with weights learned from natural images, yield a better model initialization and, in turn, improved segmentation results. We applied transfer learning and initialized the encoders, ResNet34 and ResNet22, with ImageNet weights and with weights learned from breast cancer classification, respectively. A large, clinically realistic Finnish mammography screening dataset was used for model training and evaluation, and the independent Portuguese INbreast dataset was used for additional evaluation of the models. Five-fold cross-validation was applied during training, and the soft Focal Tversky loss was used as the training objective. Dice score and Intersection over Union (IoU) quantified the similarity between the annotated and automatically produced segmentation masks. The best-performing encoder-decoder, a ResNet34 encoder paired with a U-Net decoder, yielded a Dice score (mean±SD) of 0.7677±0.2134 on the Finnish dataset, and a ResNet22 encoder paired with a U-Net decoder yielded 0.8430±0.1091 on the INbreast dataset. No large differences in segmentation accuracy were found between encoders initialized with weights pre-trained for breast cancer evaluation and those pre-trained for natural image classification.
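The evaluation metrics and training loss named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the Focal Tversky hyperparameters (alpha, beta, gamma) are commonly used defaults and are assumptions here, as the paper's exact settings are not given in the abstract.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|), on binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union: |A ∩ B| / |A ∪ B|, on binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Soft Focal Tversky loss on predicted probabilities.

    Tversky index TI = TP / (TP + alpha*FN + beta*FP), computed on soft
    values; the loss is (1 - TI)**gamma. The alpha/beta/gamma values are
    illustrative defaults, not necessarily those used in the paper.
    """
    probs, target = np.asarray(probs, float), np.asarray(target, float)
    tp = (probs * target).sum()
    fn = ((1.0 - probs) * target).sum()
    fp = (probs * (1.0 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

In a 5-fold cross-validation setup such as the one described, these functions would be applied per image on each validation fold, with the mean±SD Dice and IoU reported across all held-out images.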
Proceedings of SPIE
SPIE Medical Imaging
|Type of Publication:||B3 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences; 217 Medical engineering|
Miika T. Nieminen received funding from the Jane and Aatos Erkko Foundation and the Technology Industries of Finland Centennial Foundation. Antti Isosalo received funding from the Jenny and Antti Wihuri Foundation (grant no. 00210099) and the Thelma Mäkikyrö Foundation. Satu I. Inkinen received funding from the Academy of Finland (project no. 316899).
|Academy of Finland Grant Number:||316899 (Academy of Finland Funding decision)|
© (2022) Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited.