University of Oulu

Ji, L., Zhu, Q., Zhang, Y., Yin, J., Wei, R., Xiao, J., Xiao, D., & Zhao, G. (2022). Cross-domain heterogeneous residual network for single image super-resolution. Neural Networks, 149, 84–94. https://doi.org/10.1016/j.neunet.2022.02.008

Cross-domain heterogeneous residual network for single image super-resolution

Author: Ji, Li1; Zhu, Qinghui1; Zhang, Yongqin1,2;
Organizations: 1School of Information Science and Technology, Northwest University, Xi’an 710127, China
2CAS Key Laboratory of Spectral Imaging Technology, Xi’an 710119, China
3Electronic Information School, Wuhan University, Wuhan 430072, China
4School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
5Center for Machine Vision and Signal Analysis, University of Oulu, Oulu 90014, Finland
Format: article
Version: accepted version
Access: embargoed
Persistent link: http://urn.fi/urn:nbn:fi-fe2023032433134
Language: English
Published: Elsevier, 2022
Publish Date: 2024-02-11
Description:

Abstract

Single image super-resolution is an ill-posed problem whose purpose is to recover a high-resolution image from its degraded observation. Existing deep learning-based methods compromise between performance and speed because of their heavy network designs (i.e., huge model sizes). In this paper, we propose a novel high-performance cross-domain heterogeneous residual network for super-resolved image reconstruction. Our network models heterogeneous residuals between different feature layers by hierarchical residual learning. In outer residual learning, dual-domain enhancement modules extract frequency-domain information to reinforce the space-domain features of the network mapping. In middle residual learning, wide-activated residual-in-residual dense blocks are constructed by concatenating the outputs of previous blocks as the inputs to all subsequent blocks for better parameter efficiency. In inner residual learning, wide-activated residual attention blocks are introduced to capture direction- and location-aware feature maps. The proposed method was evaluated on four benchmark datasets, showing that it constructs high-quality super-resolved images and achieves state-of-the-art performance. Code and pre-trained models are available at https://github.com/zhangyongqin/HRN.
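
For orientation, the following is a minimal PyTorch-style sketch of the three levels of residual learning described in the abstract (inner, middle, outer). The class names, channel widths, squeeze-and-excitation attention, FFT log-magnitude frequency branch, and PixelShuffle tail are all illustrative assumptions rather than the authors' design; the official implementation is at https://github.com/zhangyongqin/HRN.

# Illustrative sketch only: module names, widths, and the FFT-based frequency
# branch are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class WideActivatedResidualAttentionBlock(nn.Module):
    """Inner residual learning: wide-activated conv pair plus channel attention."""

    def __init__(self, channels: int = 64, expansion: int = 4):
        super().__init__()
        wide = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1),   # widen before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),   # project back down
        )
        # Squeeze-and-excitation style attention as an assumed stand-in for the
        # direction- and location-aware attention mentioned in the abstract.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y * self.attn(y)                    # inner residual connection


class ResidualInResidualDenseBlock(nn.Module):
    """Middle residual learning: outputs of all previous blocks are concatenated
    and fed to each subsequent block (dense connectivity), then fused."""

    def __init__(self, channels: int = 64, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.fuse = nn.ModuleList()
        for i in range(num_blocks):
            # 1x1 conv compresses the concatenation of all earlier outputs.
            self.fuse.append(nn.Conv2d(channels * (i + 1), channels, 1))
            self.blocks.append(WideActivatedResidualAttentionBlock(channels))

    def forward(self, x):
        features = [x]
        for fuse, block in zip(self.fuse, self.blocks):
            features.append(block(fuse(torch.cat(features, dim=1))))
        return x + features[-1]                        # middle residual connection


class DualDomainSketch(nn.Module):
    """Outer residual learning: a frequency-domain branch (FFT log-magnitude,
    an assumed example) reinforces the space-domain features, and the trunk is
    wrapped in a global residual before sub-pixel upsampling."""

    def __init__(self, channels: int = 64, num_groups: int = 4, scale: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.freq = nn.Conv2d(3, channels, 3, padding=1)
        self.trunk = nn.Sequential(
            *[ResidualInResidualDenseBlock(channels) for _ in range(num_groups)]
        )
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                    # sub-pixel upsampling
        )

    def forward(self, x):
        spatial = self.head(x)
        frequency = self.freq(torch.log1p(torch.abs(torch.fft.fft2(x))))
        features = spatial + frequency
        return self.tail(features + self.trunk(features))  # outer residual


if __name__ == "__main__":
    model = DualDomainSketch(scale=2)
    lr = torch.rand(1, 3, 48, 48)                      # toy low-resolution patch
    print(model(lr).shape)                             # torch.Size([1, 3, 96, 96])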


Series: Neural networks
ISSN: 0893-6080
ISSN-E: 1879-2782
ISSN-L: 0893-6080
Volume: 149
Pages: 84–94
DOI: 10.1016/j.neunet.2022.02.008
OADOI: https://oadoi.org/10.1016/j.neunet.2022.02.008
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Subjects:
Funding: This work was supported by the Natural Science Basic Research Program of Shaanxi, China (Program No. 2019JM-103), the New Star of Youth Science and Technology of Shaanxi Province, China (Grant No. 2020KJXX-007), the Social Science Foundation of Shaanxi Province, China (Grant No. 2019H010), the Open Research Fund of CAS Key Laboratory of Spectral Imaging Technology, China (Grant No. LSIT201920W), and the National Natural Science Foundation of China (Grant No. 62173270).
Copyright information: © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license, https://creativecommons.org/licenses/by-nc-nd/4.0/.