Deep learning for magnification independent breast cancer histopathology image classification |
|
Author: | Bayramoglu, Neslihan1; Kannala, Juho2; Heikkilä, Janne3 |
Organizations: |
1Center for Machine Vision and Signal Analysis, University of Oulu, Finland; 2Department of Computer Science, Aalto University, Finland; 3Center for Machine Vision and Signal Analysis, University of Oulu, Finland |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 3.6 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2019090426597 |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2017 |
Publish Date: | 2019-09-04 |
Description: |
Abstract: Microscopic analysis of breast tissue is necessary for a definitive diagnosis of breast cancer, which is the most common cancer among women. Pathology examination requires time-consuming scanning through tissue images under different magnification levels to find clinical assessment clues that produce correct diagnoses. Advances in digital imaging techniques offer assessment of pathology images using computer vision and machine learning methods, which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnification using convolutional neural networks (CNNs). We propose two different architectures: a single-task CNN is used to predict malignancy, and a multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on the BreaKHis dataset. Experimental results show that our magnification-independent CNN approach improves the performance of the magnification-specific model. Our results on this limited training set are comparable with previous state-of-the-art results obtained with hand-crafted features. However, unlike previous methods, our approach has the potential to benefit directly from additional training data, and such additional data could be captured at the same or different magnification levels than the previous data.
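The multi-task CNN described in the abstract trains a single network on two objectives at once: predicting malignancy and predicting the image's magnification level (BreaKHis images come at 40X, 100X, 200X, and 400X). A minimal sketch of such a joint training loss in plain Python is shown below; the additive combination, the `weight` balance, and the function names are illustrative assumptions, not details taken from the paper:

```python
import math

# BreaKHis magnification levels and the two diagnostic classes.
MAGNIFICATIONS = ["40X", "100X", "200X", "400X"]
DIAGNOSES = ["benign", "malignant"]

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_index):
    """Negative log-likelihood of the target class."""
    return -math.log(probs[target_index])

def multi_task_loss(diag_logits, mag_logits, diag_target, mag_target, weight=1.0):
    """Joint loss for a two-headed network: malignancy + magnification.

    `weight` balances the auxiliary magnification task against the
    primary diagnosis task; the 1.0 default is an assumption for
    illustration, not a value from the paper.
    """
    loss_diag = cross_entropy(softmax(diag_logits), diag_target)
    loss_mag = cross_entropy(softmax(mag_logits), mag_target)
    return loss_diag + weight * loss_mag
```

Setting `weight=0.0` recovers the single-task case, which predicts malignancy alone; this is one simple way to see the two proposed architectures as points on the same spectrum.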
|
ISBN: | 978-1-5090-4847-2 |
ISBN Print: | 978-1-5090-4848-9 |
Pages: | 2440 - 2445 |
DOI: | 10.1109/ICPR.2016.7900002 |
OADOI: | https://oadoi.org/10.1109/ICPR.2016.7900002 |
Host publication: |
Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR 2016) |
Conference: |
International Conference on Pattern Recognition |
Type of Publication: |
A4 Article in conference proceedings |
Field of Science: |
113 Computer and information sciences; 3111 Biomedicine; 213 Electronic, automation and communications engineering, electronics |
Copyright information: |
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |