Reading: Learning from Simulated and Unsupervised Images through Adversarial Training

Fangda Han
2 min read · May 5, 2020

https://arxiv.org/abs/1612.07828

Motivation

Learn a model that improves the realism of synthetic images from a simulator using unlabeled real data, while preserving the annotation information.

Methodology

  • Input:
  1. Synthetic images from a simulator (which look unrealistic)
  2. Unlabeled real images
  • Output: Refined images (which look realistic)
  • Structure: a Refiner R (an autoencoder) and a Discriminator D that learns to distinguish refined (fake) images from real ones
  • Loss
Two loss terms:
l_{real} is the adversarial (GAN) loss, which pushes refined images toward the real-image distribution.
l_{reg} is an L1 self-regularization loss at the image or feature level, keeping the refined image close to the synthetic input so annotations are preserved. At the feature level, it is similar in spirit to perceptual loss¹.
  • Tricks
Local Adversarial Loss: the discriminator classifies local patches rather than the whole image; this idea was also used later in Pix2Pix².
Experience Replay: the discriminator is also trained on a history of past refined images, a technique widely used in reinforcement learning.
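The refiner's two-term objective can be sketched as follows; this is a minimal NumPy illustration, not the paper's implementation. The discriminator is assumed to output one probability per local patch that the patch is fake (the local adversarial loss), and `lam` is an illustrative weight on the self-regularization term.

```python
import numpy as np

def refiner_loss(refined, synthetic, d_patch_probs, lam=0.5):
    """Combined refiner objective: adversarial realism term plus
    L1 self-regularization against the original synthetic input.

    refined, synthetic : (H, W) arrays, the refined image and its source
    d_patch_probs      : (h, w) discriminator outputs, one probability
                         per local patch that the patch is *fake*
    lam                : weight on the self-regularization term
    """
    eps = 1e-8
    # l_real: the refiner wants every local patch judged real, i.e.
    # minimize -log(1 - D(patch is fake)), summed over all patches
    l_real = -np.log(np.clip(1.0 - d_patch_probs, eps, 1.0)).sum()
    # l_reg: L1 distance to the synthetic input preserves annotations
    l_reg = np.abs(refined - synthetic).sum()
    return l_real + lam * l_reg
```

Summing per-patch terms is what makes the adversarial loss "local": every region of the refined image must look real on its own, which discourages the refiner from hiding artifacts in one corner of the image.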
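Experience replay can be sketched as a fixed-size pool of previously refined images; each discriminator update mixes current refiner outputs with samples from this history, so the refiner cannot "forget" artifacts it once produced. The class name, buffer capacity, and half-and-half batch split below are illustrative choices, not the paper's exact settings.

```python
import random

class ImageHistoryBuffer:
    """Pool of past refined images used to stabilize GAN training:
    the discriminator also sees older refiner outputs, not only the
    most recent ones."""

    def __init__(self, capacity=512, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.rng = random.Random(seed)

    def add(self, images):
        for img in images:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                # overwrite a random old entry once the pool is full
                self.buffer[self.rng.randrange(self.capacity)] = img

    def mixed_batch(self, current_images):
        """Return a discriminator batch: half current refiner outputs,
        half images sampled from the history pool."""
        half = len(current_images) // 2
        history = [self.buffer[self.rng.randrange(len(self.buffer))]
                   for _ in range(half)] if self.buffer else []
        return current_images[:len(current_images) - half] + history
```

In a training loop, each batch of freshly refined images would be passed through `add` and the discriminator would be updated on `mixed_batch` instead of the raw refiner output.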

Datasets

  1. MPIIGaze dataset
  2. NYU hand pose dataset of depth images

Evaluation Metrics

  1. Prediction accuracy of a pretrained task classifier evaluated on the refined images

[1] https://arxiv.org/pdf/1603.08155.pdf

[2] https://arxiv.org/abs/1611.07004
