Robust Kronecker-decomposable component analysis for low-rank modeling |
|
Author: | Bahri, Mehdi1; Panagakis, Yannis1,2; Zafeiriou, Stefanos1,3 |
Organizations: |
1Imperial College London, UK 2Middlesex University London, UK 3University of Oulu, Finland |
Format: | article |
Version: | accepted version |
Access: | open |
Online Access: | PDF Full Text (PDF, 1.8 MB) |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2019100330983 |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2017 |
Publish Date: | 2019-10-03 |
Description: |
Abstract: Dictionary learning and component analysis lie at the intersection of signal and image processing, computer vision, and statistical machine learning, and are among the most well-studied and active research fields. In dictionary learning, the current methods of choice are arguably K-SVD and its variants, which learn a dictionary (i.e., a decomposition) for sparse coding via Singular Value Decomposition. In robust component analysis, leading methods derive from Principal Component Pursuit (PCP), which recovers a low-rank matrix from sparse corruptions of unknown magnitude and support. However, K-SVD is sensitive to the presence of noise and outliers in the training set. Additionally, PCP does not provide a dictionary that respects the structure of the data (e.g., images), and requires expensive SVD computations when solved by convex relaxation. In this paper, we introduce a new robust decomposition of images by combining ideas from sparse dictionary learning and PCP. We propose a novel Kronecker-decomposable component analysis which is robust to gross corruption, can be used for low-rank modeling, and leverages separability to solve significantly smaller problems. We design an efficient learning algorithm by drawing links with a restricted form of tensor factorization. The effectiveness of the proposed approach is demonstrated on real-world applications, namely background subtraction and image denoising, by performing a thorough comparison with the current state of the art.
|
ISBN: | 978-1-5386-1032-9 |
ISBN Print: | 978-1-5386-1033-6 |
Pages: | 3372 - 3381 |
DOI: | 10.1109/ICCV.2017.363 |
OADOI: | https://oadoi.org/10.1109/ICCV.2017.363 |
Host publication: |
2017 IEEE International Conference on Computer Vision (ICCV) |
Conference: |
IEEE International Conference on Computer Vision |
Type of Publication: |
A4 Article in conference proceedings |
Field of Science: |
113 Computer and information sciences; 213 Electronic, automation and communications engineering, electronics |
Funding: |
The work of Y. Panagakis has been partially supported by the European Community Horizon 2020 [H2020/2014-2020] under Grant Agreement No. 645094 (SEWA). S. Zafeiriou was partially funded by EPSRC Project EP/N007743/1 (FACER2VM). |
Copyright information: |
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |