Challenging common assumptions in the unsupervised learning of disentangled representations, Locatello et al., ICML'19
Today’s paper choice won a best paper award at ICML’19. The ‘common assumptions’ that the paper challenges seem to be: “unsupervised learning of disentangled representations is possible, and useful!”
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions.
What exactly is a ‘disentangled representation’ and why might we want one?
Put the ‘disentangled’ part to one side for a moment, and let’s start out by revisiting what we mean by a representation. Given a real-world observation x (e.g. an image or video), a representation r(x) is a transformation of x (typically to a lower dimensional space in order to be useful) that somehow preserves the salient information in x, so that we can still use r(x) to extract useful information about the input (e.g. for building classifiers). As a trivial example, suppose we had real world observations consisting of 1000 points sampled from a …

