University of Oulu

M. Bahri, Y. Panagakis and S. Zafeiriou, "Robust Kronecker Component Analysis," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 10, pp. 2365-2379, 1 Oct. 2019, doi: 10.1109/TPAMI.2018.2881476

Robust Kronecker component analysis

Author: Bahri, Mehdi1; Panagakis, Yannis1,2; Zafeiriou, Stefanos1,3
Organizations: 1Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
2Middlesex University, London NW4 4BT, United Kingdom
3University of Oulu, Oulu 90014, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 2.6 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020060540834
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Publish Date: 2020-06-05
Description:

Abstract

Dictionary learning and component analysis models are fundamental for learning compact representations that are relevant to a given task (feature extraction, dimensionality reduction, denoising, etc.). The model complexity is encoded by means of specific structure, such as sparsity, low-rankness, or nonnegativity. Unfortunately, approaches like K-SVD, which learn dictionaries for sparse coding via Singular Value Decomposition (SVD), are hard to scale to high-volume and high-dimensional visual data, and fragile in the presence of outliers. Conversely, robust component analysis methods such as Robust Principal Component Analysis (RPCA) are able to recover low-complexity (e.g., low-rank) representations from data corrupted with noise of unknown magnitude and support, but do not provide a dictionary that respects the structure of the data (e.g., images), and also involve expensive computations. In this paper, we propose a novel Kronecker-decomposable component analysis model, coined Robust Kronecker Component Analysis (RKCA), that combines ideas from sparse dictionary learning and robust component analysis. RKCA has several appealing properties: it is robust to gross corruption, it can be used for low-rank modeling, and it leverages separability to solve significantly smaller problems. We design an efficient learning algorithm by drawing links with a restricted form of tensor factorization, and analyze its optimality and low-rankness properties. The effectiveness of the proposed approach is demonstrated on real-world applications, namely background subtraction and image denoising and completion, by performing a thorough comparison with the current state of the art.
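
The abstract's claim that separability lets RKCA solve significantly smaller problems rests on using a Kronecker-structured (separable) dictionary instead of one large flat dictionary. As a minimal sketch, assuming a per-observation model of the form X ≈ A R B^T (the exact RKCA formulation, including its sparse error term and regularizers, is given in the paper), the NumPy snippet below verifies the standard identity vec(A R B^T) = (B ⊗ A) vec(R) and contrasts the storage of the two small factors with that of the explicit Kronecker dictionary; all names and sizes are illustrative, not the authors' code.

    import numpy as np

    # Illustrative sizes only: m x n observations, r1 x r2 core.
    m, n, r1, r2 = 8, 6, 3, 2

    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, r1))   # left (column) dictionary, hypothetical
    B = rng.standard_normal((n, r2))   # right (row) dictionary, hypothetical
    R = rng.standard_normal((r1, r2))  # core coefficients for one observation

    # Separable reconstruction of a single m x n observation.
    X_sep = A @ R @ B.T

    # Equivalent flat view via the Kronecker identity
    # vec(A R B^T) = (B kron A) vec(R), with column-major vectorisation.
    vec = lambda M: M.reshape(-1, order="F")
    X_flat = (np.kron(B, A) @ vec(R)).reshape((m, n), order="F")
    assert np.allclose(X_sep, X_flat)

    # Storage: two small factors versus the explicit Kronecker dictionary.
    print("separable factors:", A.size + B.size, "entries")
    print("explicit (B kron A):", np.kron(B, A).size, "entries")

This only illustrates why operating on the small factors, as RKCA's separability allows, leads to much smaller subproblems than working with an explicit flat dictionary of the corresponding Kronecker size.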


Series: IEEE transactions on pattern analysis and machine intelligence
ISSN: 0162-8828
ISSN-E: 2160-9292
ISSN-L: 0162-8828
Volume: 41
Issue: 10
Pages: 2365 - 2379
DOI: 10.1109/TPAMI.2018.2881476
OADOI: https://oadoi.org/10.1109/TPAMI.2018.2881476
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
Funding: Mehdi Bahri was partially funded by the Department of Computing, Imperial College London. The work of Y. Panagakis has been partially supported by the European Community Horizon 2020 [H2020/2014-2020] under Grant Agreement No. 645094 (SEWA). S. Zafeiriou was partially funded by EPSRC Project EP/N007743/1 (FACER2VM) and also partially funded by a Google Faculty Research Award.
Copyright information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.