Autoencoding slow representations for semi-supervised data-efficient regression
|Author:||Struckmeier, Oliver [1]; Tiwari, Kshitij [2]; Kyrki, Ville [1]|
[1] Intelligent Robotics, Aalto University, Maarintie 8, 02150, Espoo, Finland
[2] Perception Engineering Group, University of Oulu, Erkki Koiso-Kanttilan Katu 3, 90014, Oulu, Finland
|Online Access:||PDF Full Text (PDF, 2.2 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe20231025141284|
|Publish Date:||2023-10-25|
The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a different, slower time scale. By applying this principle to state-of-the-art unsupervised representation learning methods, one can learn a latent embedding that makes supervised downstream regression tasks more data-efficient. In this paper, we compare different approaches to unsupervised slow representation learning, such as Lp-norm-based slowness regularization and the SlowVAE, and propose a new term based on Brownian motion, which we use in our method, the S-VAE. We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency in state estimation and behavioral cloning tasks. We find that slow representations yield substantial performance improvements in settings where only sparse labeled training data is available. Furthermore, we present a theoretical and empirical comparison of the discussed slowness regularization terms. Finally, we discuss how the Fréchet Inception Distance (FID), commonly used to assess the generative capabilities of GANs, can predict the performance of trained models in supervised downstream tasks.
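The Lp-norm slowness regularization mentioned in the abstract can be illustrated with a minimal sketch: penalize the Lp distance between latent codes of consecutive frames so that the learned representation varies slowly over time. The function name and shapes below are hypothetical illustrations, not the paper's exact formulation, and the penalty would be added to a standard reconstruction/VAE objective during training.

```python
import numpy as np

def slowness_penalty(z, p=2):
    """Lp-norm slowness regularizer over a latent trajectory.

    z: array of shape (T, d) holding latent codes for T consecutive frames.
    Returns the mean Lp^p distance between consecutive latents; minimizing it
    encourages representations whose generative factors change slowly in time.
    Hypothetical sketch, not the authors' exact loss.
    """
    diffs = z[1:] - z[:-1]                      # (T-1, d) temporal differences
    return np.mean(np.sum(np.abs(diffs) ** p, axis=1))

# A constant trajectory incurs no penalty; a quickly varying one does.
z_slow = np.ones((5, 3))                        # latent does not move
z_fast = np.arange(15.0).reshape(5, 3)          # latent jumps every frame
```

In practice the total loss would be something like `reconstruction + beta * KL + lam * slowness_penalty(z)`, with the weight `lam` trading off slowness against reconstruction quality.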
|Pages:||2297 - 2315|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
Open Access funding provided by Aalto University. This project was partially funded by the Human Brain Project Second Specific Grant Agreement (SGA) 2, project number 680093.
© The Author(s) 2023. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.