Geometry of Deep Generative Models for Disentangled Representations

Abstract

Deep generative models such as variational autoencoders approximate the intrinsic geometry of high-dimensional data manifolds by learning low-dimensional latent variables and an embedding function. The geometric properties of these latent spaces have been studied through the lens of Riemannian geometry, via analysis of the non-linearity of the generator function. More recently, deep generative models have been used to learn semantically meaningful 'disentangled' representations that capture task-relevant attributes while remaining invariant to other attributes. In this work, we explore the geometry of popular generative models for disentangled representation learning. We use several metrics to compare the latent spaces of disentangled representation models in terms of class separability and curvature. Our results establish that the class-distinguishing features in the disentangled latent space exhibit higher curvature than those of a variational autoencoder. We evaluate and compare the geometry of three such models against a variational autoencoder on two different datasets. Further, our results show that distances and interpolation in the latent space are significantly improved with Riemannian metrics derived from the curvature of the space. We expect these results to have implications for making deep networks more robust, generalizable, and interpretable.
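
To make the geometric machinery concrete, the sketch below illustrates the standard construction of a generator-induced (pullback) Riemannian metric, G(z) = J_g(z)^T J_g(z), where J_g is the Jacobian of the decoder at latent point z, and uses it to measure the Riemannian length of a latent curve. This is a minimal illustration with a hypothetical, untrained PyTorch decoder; it does not reproduce the paper's specific models, datasets, or experiments.

import torch
import torch.nn as nn

# Toy decoder g: R^2 -> R^784, standing in for a trained VAE or
# disentangled-model generator (hypothetical architecture, for illustration).
decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))

def pullback_metric(z):
    # Metric induced on the latent space by the generator:
    # G(z) = J_g(z)^T J_g(z), with J_g the decoder Jacobian at z.
    J = torch.autograd.functional.jacobian(decoder, z)  # shape (784, 2)
    return J.T @ J                                      # shape (2, 2)

def curve_length(path):
    # Riemannian length of a discretized latent curve:
    # sum_i sqrt(dz_i^T G(z_i) dz_i).
    total = torch.tensor(0.0)
    for z0, z1 in zip(path[:-1], path[1:]):
        dz = z1 - z0
        total = total + torch.sqrt(dz @ pullback_metric(z0) @ dz)
    return total

za, zb = torch.zeros(2), torch.ones(2)
ts = torch.linspace(0.0, 1.0, 20).unsqueeze(1)
path = za + ts * (zb - za)  # straight line in latent coordinates
print("Euclidean distance:        ", torch.norm(zb - za).item())
print("Riemannian length (approx):", curve_length(path).item())

In practice, the latent points would come from the trained model's encoder, and geodesic interpolants would be found by minimizing this length functional over curves rather than using the straight-line path; the gap between the Euclidean distance and the induced Riemannian length reflects the non-linearity (curvature) of the generator that the paper analyzes.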

Publication
Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 2018

Please cite using the following BibTeX:

@inproceedings{Shukla2018GeometryOD,
  title={Geometry of Deep Generative Models for Disentangled Representations},
  author={Ankita Shukla and Shagun Uppal and Sarthak Bhagat and Saket Anand and Pavan K. Turaga},
  booktitle={Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing},
  year={2018}
}