Abstract
We tackle the problem of disentangling the latent space of an autoencoder in order to separate labelled attribute information from other characteristic information. This then allows us to change selected attributes while preserving other information. Our method, matrix subspace projection, is much simpler than previous approaches to latent space factorisation: for example, it requires neither multiple discriminators nor a careful weighting among their loss functions. Furthermore, our new model can be applied to autoencoders as a plugin, and works across diverse domains such as images and text. We demonstrate the utility of our method for attribute manipulation in autoencoders trained across varied domains, using both human evaluation and automated methods. The generation quality of our new model (e.g. reconstruction, conditional generation) is highly competitive with a number of strong baselines.
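As a rough illustration of the plugin idea described above, the sketch below attaches a learned projection matrix to an autoencoder's latent code so that the projected component predicts the labelled attributes. The class name, the orthogonality penalty, and all sizes are illustrative assumptions, not the paper's exact formulation.

# Minimal illustrative sketch; MatrixProjectionPlugin and its losses are assumptions, not the authors' exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatrixProjectionPlugin(nn.Module):
    """Attachable loss term that factorises an autoencoder's latent code z
    into an attribute subspace (via a learned matrix M) and a residual."""
    def __init__(self, latent_dim: int, attr_dim: int):
        super().__init__()
        # M maps the latent space onto an attribute subspace.
        self.M = nn.Parameter(0.01 * torch.randn(attr_dim, latent_dim))

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        attr_pred = z @ self.M.t()            # project z onto the attribute subspace
        attr_loss = F.mse_loss(attr_pred, y)  # the projection should recover the labels y
        # Encourage M to be near-orthonormal so the attribute and residual
        # subspaces stay disentangled (one common choice of regulariser).
        eye = torch.eye(self.M.shape[0], device=z.device)
        ortho_loss = ((self.M @ self.M.t() - eye) ** 2).mean()
        return attr_loss + ortho_loss

# Usage: total_loss = reconstruction_loss + plugin(z, y), where z is the encoder
# output and y the labelled attributes; gradients flow into both the encoder and M.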