Generative adversarial networks improve interior computed tomography angiography reconstruction
Authors: Ketola, Juuso H. J.1,2; Heino, Helinä1; Juntunen, Mikael A. K.1,3;
1Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
2The South Savo Social and Health Care Authority, Mikkeli Central Hospital, FI-50100, Finland
3Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland
4Medical Research Center Oulu, University of Oulu and Oulu University Hospital, FI-90014, Finland
5Department of Mathematics and Statistics, University of Helsinki, Helsinki, FI-00014, Finland
Persistent link: http://urn.fi/urn:nbn:fi-fe2021120859514
Publish date: 2021-12-08
In interior computed tomography (CT), the x-ray beam is collimated to a limited field-of-view (FOV) (e.g. the volume of the heart) to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP)-type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from a truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images. During training, truncated sinograms (input) were paired with the original slice images (target) to yield an improved reconstruction (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-squared error (RMSE) (0.03 ± 0.01) and structural similarity index (SSIM) (0.92 ± 0.02) values; the reference DL methods also yielded good results. Furthermore, we performed an extended FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR) (30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02).
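The two-stage process described above can be sketched as a simple composition of callables. The function and parameter names below are illustrative placeholders, not the authors' implementation: `extend_gan` stands for the stage-1 sinogram-extension model, `fbp` for a filtered back-projection reconstructor, and `refine_gan` for the stage-2 image-domain model.

```python
def reconstruct_dgan(truncated_sino, extend_gan, fbp, refine_gan):
    """Sketch of the two-stage DGAN pipeline (hypothetical interfaces).

    `extend_gan` maps a truncated sinogram to an extended one,
    `fbp` reconstructs an image from the extended sinogram, and
    `refine_gan` improves the resulting interior reconstruction.
    """
    extended = extend_gan(truncated_sino)  # stage 1: sinogram extension
    interior = fbp(extended)               # FBP of the extended sinogram
    return refine_gan(interior)            # stage 2: image-domain refinement
```

With identity-like stubs for the three models, the function simply threads the data through both stages, which makes the data flow of the pipeline explicit.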
In conclusion, our method was able to not only reconstruct the interior region with improved image quality, but also extend the reconstructed FOV by 20%.
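For reference, the RMSE and PSNR figures quoted in the abstract follow from the standard definitions. A minimal pure-Python sketch, assuming images are flattened to equal-length sequences of pixel values normalized to [0, 1] (SSIM is omitted here, as it requires windowed local statistics):

```python
import math

def rmse(ref, test):
    """Root-mean-squared error between two equal-length pixel sequences."""
    n = len(ref)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(ref, test)) / n)

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum pixel value."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * math.log10(data_range / e)
```

For example, `rmse([0.0, 0.5, 1.0], [0.1, 0.5, 0.9])` is about 0.082, giving a PSNR of about 21.8 dB for `data_range=1.0`.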
Journal: Biomedical Physics & Engineering Express
Type of publication: A1 Journal article – refereed
Fields of science:
3126 Surgery, anesthesiology, intensive care, radiology
217 Medical engineering
The authors gratefully acknowledge financial support from Business Finland (project no. 1392/31/2016), Technology Industries of Finland Centennial Foundation, Jane & Aatos Erkko Foundation, Academy of Finland (project no. 316899), and Tauno Tönning Foundation (grants no. 20180083 and 20190204).
Academy of Finland grant number: 316899 (Academy of Finland funding decision)
© 2021 The Author(s). Published by IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.