Q. Liu, X. Hong, B. Zou, J. Chen, Z. Chen and G. Zhao, "Hierarchical Contour Closure-Based Holistic Salient Object Detection," in IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4537-4552, Sept. 2017. doi: 10.1109/TIP.2017.2703081

Hierarchical contour closure-based holistic salient object detection

Author: Liu, Qing1; Hong, Xiaopeng2; Zou, Beiji1; Chen, Jie2; Chen, Zailiang1; Zhao, Guoying2
Organizations: 1School of Information Science and Engineering, Central South University, Changsha, Hunan, China
2Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 4.2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe201903057132
Language: English
Published: Institute of Electrical and Electronics Engineers, 2017
Publish Date: 2019-03-05
Description:

Abstract

Most existing salient object detection methods compute the saliency for pixels, patches, or superpixels by contrast. Such fine-grained contrast-based salient object detection methods suffer from saliency attenuation within the salient object and saliency overestimation of the background when the image is complicated. To better compute the saliency for complicated images, we propose a hierarchical contour closure-based holistic salient object detection method, in which two saliency cues, i.e., closure completeness and closure reliability, are thoroughly exploited. The former pops out the holistic homogeneous regions bounded by completely closed outer contours, and the latter highlights the holistic homogeneous regions bounded by outer contours of high average reliability. Accordingly, we propose two computational schemes to compute the corresponding saliency maps in a hierarchical segmentation space. Finally, we propose a framework to combine the two saliency maps into the final saliency map. Experimental results on three publicly available datasets show that each saliency map alone reaches state-of-the-art performance. Furthermore, our framework, which combines the two saliency maps, outperforms the state of the art. Additionally, we show that the proposed framework can easily be used to extend existing methods and substantially improve their performance.
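
As a rough illustration of the kind of pipeline the abstract describes, the following is a minimal Python sketch of fusing two per-pixel cue maps (closure completeness and closure reliability) computed over several segmentation levels. The fusion rule used here (per-level averaging followed by a product of the normalized cues) and all names (combine_hierarchical_cues, completeness_maps, reliability_maps) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a hypothetical fusion of two saliency cue maps
# computed across a hierarchy of segmentations. The fusion rule and all
# names are assumptions for illustration, not the paper's exact framework.
import numpy as np

def normalize(saliency):
    """Rescale a saliency map to [0, 1]; returns zeros for a flat map."""
    lo, hi = saliency.min(), saliency.max()
    return (saliency - lo) / (hi - lo) if hi > lo else np.zeros_like(saliency)

def combine_hierarchical_cues(completeness_maps, reliability_maps):
    """Fuse per-level completeness and reliability cue maps into one map.

    completeness_maps, reliability_maps: lists of HxW arrays, one per
    segmentation level (coarse to fine). Averaging over levels and then
    multiplying the two normalized cues is a placeholder fusion rule.
    """
    completeness = normalize(np.mean(completeness_maps, axis=0))
    reliability = normalize(np.mean(reliability_maps, axis=0))
    return normalize(completeness * reliability)

if __name__ == "__main__":
    # Toy usage with random cue maps standing in for real ones.
    rng = np.random.default_rng(0)
    h, w, levels = 64, 64, 3
    comp = [rng.random((h, w)) for _ in range(levels)]
    rel = [rng.random((h, w)) for _ in range(levels)]
    final_map = combine_hierarchical_cues(comp, rel)
    print(final_map.shape, float(final_map.min()), float(final_map.max()))
```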

Series: IEEE transactions on image processing
ISSN: 1057-7149
ISSN-E: 1941-0042
ISSN-L: 1057-7149
Volume: 26
Issue: 9
Pages: 4537 - 4552
DOI: 10.1109/TIP.2017.2703081
OADOI: https://oadoi.org/10.1109/TIP.2017.2703081
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
222 Other engineering and technologies
Subjects:
Funding: The work of Q. Liu was supported by the Scholarship from the China Scholarship Council. The work of X. Hong was supported in part by the Academy of Finland, Tekes Fidipro Program, and Infotech Oulu and in part by the National Natural Science Foundation of China under Grant 61572205. The work of B. Zou and Z. Chen was supported by the National Natural Science Foundation of China under Grant 61573380 and Grant 61672542. The work of J. Chen and G. Zhao was supported by the Academy of Finland, Tekes Fidipro Program, and Infotech Oulu.
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.