
Q. Luo, J. Su, C. Yang, W. Gui, O. Silvén and L. Liu, "CAT-EDNet: Cross-Attention Transformer-Based Encoder–Decoder Network for Salient Defect Detection of Strip Steel Surface," in IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-13, 2022, Art no. 5009813, doi: 10.1109/TIM.2022.3165270

CAT-EDNet: cross-attention transformer-based encoder–decoder network for salient defect detection of strip steel surface

Author: Luo, Qiwu1; Su, Jiaojiao1; Yang, Chunhua1; Gui, Weihua1; Silvén, Olli2; Liu, Li3
Organizations: 1School of Automation, Central South University, Changsha 410083, China
2Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, 90014 Oulu, Finland
3College of System Engineering, National University of Defense Technology, Changsha 410073, China
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 2.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2022083056745
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2022-08-30
Description:

Abstract

The morphologies of various surface defects on strip steel are obscured by oil stains, water drops, steel textures, and erratic illumination, so it remains challenging to recognize defect boundaries precisely against cluttered backgrounds. This article emphasizes the fact that skip connections between the encoder and the decoder are not equally effective, and it attempts to adaptively allocate channelwise aggregation weights, which represent differentiated information entropy values, by importing a stack of cross-attention transformers (CATs) into the encoder–decoder network (EDNet). In addition, a cross-attention refinement module (CARM) is constructed immediately after the decoder to further optimize the coarse saliency maps. The resulting CAT-EDNet addresses the semantic gap among multiscale features well owing to its multihead attention structure. Compared with 12 state-of-the-art methods, CAT-EDNet performs best at ensuring defect integrity and maintaining defect boundary details, and its detection speed reaches 28 frames/s even under a noise-interfered scenario.
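For orientation only, below is a minimal, hypothetical PyTorch sketch of the core idea summarized in the abstract: a decoder feature map attends to its same-scale encoder counterpart through multihead cross-attention, so the skip connection reweights the aggregated features instead of passing them through uniformly. The class name, tensor shapes, and single-block structure are illustrative assumptions and do not reproduce the authors' CAT stack or CARM.

import torch
import torch.nn as nn

class CrossAttentionSkip(nn.Module):
    # Hypothetical sketch: decoder features (queries) attend to same-scale
    # encoder features (keys/values), so informative channels/positions
    # receive larger aggregation weights than a plain skip connection gives.
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels,
                                          num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = dec_feat.shape
        q = dec_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from the decoder
        kv = enc_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from the encoder
        fused, _ = self.attn(q, kv, kv)           # cross-attention fusion
        fused = self.norm(fused + q)              # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Quick shape check with dummy same-scale feature maps.
skip = CrossAttentionSkip(channels=64)
dec = torch.randn(1, 64, 32, 32)
enc = torch.randn(1, 64, 32, 32)
print(skip(dec, enc).shape)  # torch.Size([1, 64, 32, 32])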


Series: IEEE transactions on instrumentation and measurement
ISSN: 0018-9456
ISSN-E: 1557-9662
ISSN-L: 0018-9456
Volume: 71
Pages: 1 - 13
Article number: 9757932
DOI: 10.1109/TIM.2022.3165270
OADOI: https://oadoi.org/10.1109/tim.2022.3165270
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 61973323 and Grant 6201101509, in part by the Hunan Provincial Natural Science Foundation under Grant 2021JJ20078, and in part by the Science and Technology Innovation Program of Hunan Province under Grant 2021RC3019 and Grant 2021RC1001.
Copyright information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.