Meta-Sim 2
Unsupervised Learning of Scene Structure for Synthetic Data Generation

Jeevan Devaranjan*1,3
Amlan Kar* 1,2,4
Sanja Fidler1,2,4

1NVIDIA
2University of Toronto
3University of Waterloo
4Vector Institute
ECCV 2020



Procedural models are widely used to synthesize scenes for graphics and gaming, and to create (labeled) synthetic datasets for ML. To produce realistic and diverse scenes, a number of parameters governing the procedural models have to be carefully tuned by experts. These parameters control both the structure of the generated scenes (e.g. how many cars are in the scene) and the parameters that place objects in valid configurations. Meta-Sim aimed to automatically tune these parameters given a target collection of real images, in an unsupervised way. In Meta-Sim2, we aim to learn the scene structure in addition to the parameters, which is a challenging problem due to its discrete nature. Meta-Sim2 proceeds by learning to sequentially sample rule expansions from a given probabilistic scene grammar. Due to the discrete nature of the problem, we use Reinforcement Learning to train our model, and design a feature-space divergence between our synthesized and target images that is key to successful training. Experiments on a real driving dataset show that, without any supervision, we can successfully learn to generate data that captures discrete structural statistics of objects in real images, such as their frequency. We also show that this leads to downstream improvement in the performance of an object detector trained on our generated dataset, compared to other baseline simulation methods.
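To make the idea of sequentially sampling rule expansions concrete, here is a minimal toy sketch of drawing a scene structure from a probabilistic grammar. The grammar, its symbols, and the expansion probabilities below are illustrative assumptions, not the grammar used in the paper; in Meta-Sim2 the model learns the expansion distribution with RL, whereas here the probabilities are fixed by hand.

```python
import random

# Toy probabilistic scene grammar (hypothetical, for illustration only):
# each nonterminal maps to a list of (expansion, probability) pairs.
# "Road" can recursively add a "Car" or "Tree", or stop expanding.
GRAMMAR = {
    "Scene": [(["Road"], 1.0)],
    "Road": [(["Car", "Road"], 0.4), (["Tree", "Road"], 0.2), ([], 0.4)],
}

def sample_structure(symbol="Scene", rng=random):
    """Sequentially sample rule expansions, returning the terminal objects."""
    if symbol not in GRAMMAR:  # terminal symbol, e.g. "Car"
        return [symbol]
    expansions = [e for e, _ in GRAMMAR[symbol]]
    weights = [p for _, p in GRAMMAR[symbol]]
    chosen = rng.choices(expansions, weights=weights, k=1)[0]
    objects = []
    for child in chosen:
        objects.extend(sample_structure(child, rng))
    return objects

random.seed(0)
print(sample_structure())  # e.g. a list of "Car"/"Tree" objects
```

Each sampled expansion is a discrete choice, which is why gradient-based training does not apply directly and a score-function (RL) estimator is used instead.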

* denotes equal contribution. Work done during JD's internship at NVIDIA



News



Paper

Jeevan Devaranjan*, Amlan Kar*, Sanja Fidler

Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation

ECCV, 2020

[Preprint]
[Bibtex]


Presentation Video

Please check out our presentation video for a walk-through of the method and results.


Qualitative Results


Input Prob. Grammar | Meta-Sim2 | KITTI Dataset

(Left) Samples from our probabilistic grammar; (middle) Meta-Sim2's corresponding samples; (right) random samples from KITTI. Notice how the model learns to generate diverse scene structures by adding vegetation, people, bikes, and even road signs, making it emulate the target dataset better.




This webpage template was borrowed from Richard Zhang.