University of Oulu

Liu, X., Yao, J., Hong, X., Huang, X., Zhou, Z., Qi, C., & Zhao, G. (2018). Background Subtraction Using Spatio-Temporal Group Sparsity Recovery. IEEE Transactions on Circuits and Systems for Video Technology, 28(8), 1737-1751. doi:10.1109/TCSVT.2017.2697972

Background subtraction using spatio-temporal group sparsity recovery

Author: Liu, Xin1; Yao, Jiawen2; Hong, Xiaopeng1; Huang, Xiaohua1; Zhou, Ziheng1; Qi, Chun3; Zhao, Guoying1
Organizations: 1Center for Machine Vision and Signal Analysis, University of Oulu
2Department of Computer Science and Engineering, The University of Texas at Arlington
3School of Electronics and Information Engineering, Xi’an Jiaotong University
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 4.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2018102638831
Language: English
Published: Institute of Electrical and Electronics Engineers, 2017
Publish Date: 2018-10-26

Abstract

Background subtraction is a key step in a wide spectrum of video applications, such as object tracking and human behavior analysis. Compressive sensing-based methods, which make few specific assumptions about the background, have recently attracted wide attention in background subtraction. Within the compressive sensing framework, background subtraction is cast as a decomposition and optimization problem in which the foreground is typically modeled as pixel-wise sparse outliers. In real videos, however, foreground pixels are rarely randomly distributed; instead, they cluster into groups. Moreover, owing to their high computational cost, most compressive sensing-based methods cannot process frames online. In this paper, we take into account the group properties of foreground signals in both the spatial and temporal domains and propose a greedy pursuit-based method, called spatio-temporal group sparsity recovery, which iteratively prunes data residues according to both sparsity and group-clustering priors rather than sparsity alone. Furthermore, a random strategy for background dictionary learning is used to handle complex background variations without requiring foreground-free training data. Finally, we propose a two-pass framework to achieve online processing. The proposed method is validated on multiple challenging video sequences. Experiments demonstrate that our approach works effectively on a wide range of complex scenarios and achieves state-of-the-art performance with far fewer computations.
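As a rough illustration of the recovery loop the abstract describes, the Python sketch below is our own placeholder, not the authors' algorithm: all names, parameters, and the grouping heuristic are assumptions. It models each frame as a linear combination of randomly sampled background frames plus a group-sparse foreground, and greedily re-estimates the foreground support by smoothing the residual energy over a small spatial window so that clustered outliers are favored over isolated ones.

import numpy as np
from scipy.ndimage import uniform_filter

def group_sparse_bg_subtraction(frame, bg_dict, n_iter=10, keep_ratio=0.05, win=3):
    # Hypothetical sketch, not the paper's STGSR method.
    # frame:   (H, W) grayscale frame
    # bg_dict: (H*W, K) matrix whose columns are vectorized frames drawn
    #          at random from the sequence (a stand-in for the paper's
    #          random background dictionary learning strategy)
    y = frame.ravel().astype(float)
    support = np.zeros(y.size, dtype=bool)  # current foreground support
    for _ in range(n_iter):
        # Least-squares background fit on pixels currently labeled background.
        bg = ~support
        w, *_ = np.linalg.lstsq(bg_dict[bg], y[bg], rcond=None)
        residual = y - bg_dict @ w
        # Crude spatial grouping prior: average the squared residual over a
        # win x win neighborhood, so isolated outliers score low and
        # group-clustered ones score high.
        energy = uniform_filter(residual.reshape(frame.shape) ** 2, size=win)
        # Greedy pruning: keep only the strongest group-supported pixels.
        k = max(1, int(keep_ratio * y.size))
        thresh = np.partition(energy.ravel(), -k)[-k]
        support = (energy >= thresh).ravel()
    return support.reshape(frame.shape)  # binary foreground mask

A two-pass, online variant in the spirit of the paper would bootstrap the dictionary on a first forward pass and refine the foreground masks on a second; here a single fixed dictionary stands in for both passes.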


Series: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
ISSN-E: 1558-2205
ISSN-L: 1051-8215
Volume: 28
Issue: 8
Pages: 1737 - 1751
DOI: 10.1109/TCSVT.2017.2697972
OADOI: https://oadoi.org/10.1109/TCSVT.2017.2697972
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.