University of Oulu

Y. Liu, W. Chen, L. Liu and M. S. Lew, "SwapGAN: A Multistage Generative Approach for Person-to-Person Fashion Style Transfer," in IEEE Transactions on Multimedia. doi: 10.1109/TMM.2019.2897897

SwapGAN: a multistage generative approach for person-to-person fashion style transfer

Author: Liu, Yu1; Chen, Wei1; Liu, Li2,3; Lew, Michael S.1
Organizations: 1Leiden Institute of Advanced Computer Science, Leiden University, The Netherlands
2College of System Engineering, National University of Defense Technology, China
3Center for Machine Vision and Signal Analysis, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 1.9 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe201902256190
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Publish Date: 2019-02-25
Description:

Abstract

Fashion style transfer has attracted significant attention because it poses interesting scientific challenges and is also of practical importance to the fashion industry. This paper focuses on a practical problem in fashion style transfer, person-to-person clothing swapping, which aims to visualize what a reference person would look like wearing the target clothes seen on another person, without dressing them physically. The problem remains challenging due to the varying pose deformations between different person images. In contrast to traditional nonparametric methods that blend or warp the target clothes onto the reference person, we propose a multistage deep generative approach named SwapGAN that exploits three generators and one discriminator in a unified framework and fulfills the task end-to-end. The first and second generators are conditioned on a human pose map and a segmentation map, respectively, so that the pose style and the clothes style can be transferred simultaneously. The third generator preserves the human body shape during image synthesis. The discriminator must distinguish two fake image pairs from the real image pair. The entire SwapGAN is trained by integrating the adversarial loss and a mask-consistency loss. Experimental results on the DeepFashion dataset demonstrate the improvements of SwapGAN over existing approaches through both quantitative and qualitative evaluations. Moreover, we conduct ablation studies on SwapGAN and provide a detailed analysis of its effectiveness.
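The multistage design described above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the module sizes, the 18-channel pose-heatmap encoding, the pair-based discriminator input, and the loss weight lam are all hypothetical.

# Minimal sketch of the abstract's multistage idea: three chained generators,
# one pair-scoring discriminator, adversarial + mask-consistency losses.
# All names, channel counts, and weights below are illustrative assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Sequential):
    def __init__(self, in_ch, out_ch):
        super().__init__(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch),
                         nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Tiny encoder-decoder stand-in for each generation stage."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(in_ch, 64), ConvBlock(64, 64),
                                 nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

# Stage 1 is conditioned on a pose map, stage 2 on a segmentation map,
# and stage 3 refines the result to preserve the body shape.
G_pose  = Generator(in_ch=3 + 18)  # image + pose heatmaps (18 is assumed)
G_cloth = Generator(in_ch=3 + 1)   # intermediate image + clothing mask
G_shape = Generator(in_ch=3)

# The discriminator scores image *pairs* (6 channels), so it can separate
# the real pair from the two fake pairs mentioned in the abstract.
D = nn.Sequential(ConvBlock(6, 64), nn.Conv2d(64, 1, 3, padding=1))

bce = nn.BCEWithLogitsLoss()
l1  = nn.L1Loss()

def swap(person, pose_map, seg_map):
    x1 = G_pose(torch.cat([person, pose_map], dim=1))
    x2 = G_cloth(torch.cat([x1, seg_map], dim=1))
    return G_shape(x2)

def g_loss(ref, fake, fake_mask, ref_mask, lam=10.0):
    # Adversarial term plus the mask-consistency term from the abstract;
    # the masks would come from a segmentation model in practice.
    logits = D(torch.cat([ref, fake], dim=1))
    adv = bce(logits, torch.ones_like(logits))
    return adv + lam * l1(fake_mask, ref_mask)

def d_loss(ref, real, fake1, fake2):
    # Real pair scored as 1; the two fake pairs scored as 0.
    r  = D(torch.cat([ref, real], dim=1))
    f1 = D(torch.cat([ref, fake1.detach()], dim=1))
    f2 = D(torch.cat([ref, fake2.detach()], dim=1))
    return (bce(r, torch.ones_like(r))
            + bce(f1, torch.zeros_like(f1))
            + bce(f2, torch.zeros_like(f2)))

In the full system the three generators and the discriminator would be trained jointly, alternating the two objectives above, consistent with the end-to-end training that the abstract describes.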


Series: IEEE transactions on multimedia
ISSN: 1520-9210
ISSN-E: 1941-0077
ISSN-L: 1520-9210
DOI: 10.1109/TMM.2019.2897897
OADOI: https://oadoi.org/10.1109/TMM.2019.2897897
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work was supported by the National Natural Science Foundation of China under Grant 61872379.
Copyright information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.