GAN dissection: visualizing and understanding generative adversarial networks
GAN dissection: visualizing and understanding generative adversarial networks, Bau et al., arXiv'18
Earlier this week we looked at visualisations to aid understanding and interpretation of RNNs; today's paper choice gives us a fascinating look at what happens inside a GAN (generative adversarial network). In addition to the paper, the code is available on GitHub and video demonstrations can be found on the project home page.
We’re interested in GANs that generate images.
To a human observer, a well-trained GAN appears to have learned facts about the objects in the image: for example, a door can appear on a building but not on a tree. We wish to understand how a GAN represents such a structure. Do the objects emerge as pure pixel patterns without any explicit representation of objects such as doors and trees, or does the GAN contain internal variables that correspond to the objects that humans perceive? If the GAN does contain variables for doors and trees, do those variables cause the generation of those objects, or do they merely correlate? How are relationships between objects represented?
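The causality question in particular can be probed directly: if a set of units really is responsible for, say, doors, then zeroing those units should remove doors from the generated images, whereas merely correlated units would leave them largely intact. Below is a minimal sketch of that kind of ablation test using a PyTorch forward hook; `generator`, `layer4`, and the unit ids are hypothetical stand-ins for illustration, not the paper's actual code.

```python
import torch

def make_ablation_hook(unit_ids):
    """Forward hook that zeroes a chosen set of units in a layer's output.

    If the units cause an object class, regenerating images with this
    hook attached should make that class disappear; correlation alone
    would leave the images essentially unchanged.
    """
    def hook(module, inputs, output):
        ablated = output.clone()
        ablated[:, unit_ids] = 0.0   # output shape: (batch, units, H, W)
        return ablated               # a returned value replaces the layer output
    return hook

# Hypothetical usage with a trained generator:
# handle = generator.layer4.register_forward_hook(make_ablation_hook([12, 83, 207]))
# images = generator(z)   # 'door' units silenced: do doors vanish?
# handle.remove()
```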
The basis for the study is three variants of Progressive GANs trained on LSUN scene datasets. To understand what's going on inside them, the authors first use dissection to find individual units whose activations line up with human-interpretable object concepts, and then use intervention, switching those units off and on, to test whether the units actually cause those objects to be generated.
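Dissection itself boils down to comparing each unit's thresholded activation map against segmentation masks for candidate concepts, scored by intersection-over-union (IoU). Here's a rough sketch of that agreement measure, assuming PyTorch tensors; the function name and argument shapes are mine, not the authors':

```python
import torch
import torch.nn.functional as F

def unit_concept_iou(featuremap, concept_mask, threshold):
    """IoU between one unit's 'on' region and a concept's segmentation mask.

    featuremap:   (H', W') float tensor of a single unit's activations
    concept_mask: (H, W) bool tensor marking where the concept appears
    threshold:    activation level above which the unit counts as firing
    """
    # Upsample the low-resolution featuremap to the mask's resolution.
    up = F.interpolate(featuremap[None, None], size=concept_mask.shape,
                       mode='bilinear', align_corners=False)[0, 0]
    unit_on = up > threshold
    intersection = (unit_on & concept_mask).sum().item()
    union = (unit_on | concept_mask).sum().item()
    return intersection / union if union > 0 else 0.0
```

Units whose best IoU across the candidate concepts clears a cut-off are the interpretable units the study then subjects to intervention.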