
H. Chen, H. Tang, H. Shi, W. Peng, N. Sebe and G. Zhao, "Intrinsic-Extrinsic Preserved GANs for Unsupervised 3D Pose Transfer," 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 2021, pp. 8610-8619, doi: 10.1109/ICCV48922.2021.00851.

Intrinsic-extrinsic preserved GANs for unsupervised 3D pose transfer

Author: Chen, Haoyu1; Tang, Hao2; Shi, Henglin1; Peng, Wei1; Sebe, Nicu3; Zhao, Guoying1
Organizations: 1CMVS, University of Oulu
2Computer Vision Lab, ETH Zurich
3DISI, University of Trento
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 3.3 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2023033134147
Language: English
Published: IEEE Computer Society, 2021
Publish Date: 2023-03-31

Abstract

With the strength of deep generative models, 3D pose transfer has regained intensive research interest in recent years. Existing methods mainly rely on a variety of constraints to achieve pose transfer over 3D meshes, e.g., the need for manual encoding of shape and pose disentanglement. In this paper, we present an unsupervised approach to conduct pose transfer between arbitrary given 3D meshes. Specifically, a novel Intrinsic-Extrinsic Preserved Generative Adversarial Network (IEP-GAN) is presented to preserve both intrinsic (i.e., shape) and extrinsic (i.e., pose) information. Extrinsically, we propose a co-occurrence discriminator to capture the structural/pose invariance from distinct Laplacians of the mesh. Intrinsically, a local intrinsic-preserved loss is introduced to preserve the geodesic priors while avoiding heavy computations. Finally, we show the possibility of using IEP-GAN to manipulate 3D human meshes in various ways, including pose transfer, identity swapping and pose interpolation with latent code vector arithmetic. Extensive experiments on various 3D datasets of humans, animals and hands qualitatively and quantitatively demonstrate the generality of our approach. Our proposed model produces better results and is substantially more efficient than recent state-of-the-art methods. Code is available at: https://github.com/mikecheninoulu/Unsupervised_IEPGAN


Series: IEEE International Conference on Computer Vision
ISSN: 1550-5499
ISSN-E: 2380-7504
ISSN-L: 1550-5499
ISBN: 978-1-6654-2812-5
ISBN Print: 978-1-6654-2813-2
Pages: 8610 - 8619
DOI: 10.1109/iccv48922.2021.00851
OADOI: https://oadoi.org/10.1109/iccv48922.2021.00851
Host publication: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Conference: IEEE International Conference on Computer Vision
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: This work was supported by the Academy of Finland for project MiGA (grant 316765), ICT 2023 project (grant 328115), EU H2020 SPRING (No. 871245) and EU H2020 AI4Media (No. 951911) projects, the China Scholarship Council, and Infotech Oulu. The authors also wish to acknowledge CSC - IT Center for Science, Finland, for computational resources.
Academy of Finland Grant Numbers: 316765, 328115
Detailed Information: 316765 (Academy of Finland Funding decision)
328115 (Academy of Finland Funding decision)
Copyright information: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.