Abstract. We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest that an entirely data-driven approach to people generation is possible.
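As a rough illustration of the two-stage split described in the abstract, the sketch below shows how sampling might proceed: latents in, a segmentation out of stage one, then an image conditioned on that segmentation. The functions sample_segmentation and sample_image are hypothetical placeholders standing in for the trained networks; they are not the API of the released code, and the resolutions, class count and latent sizes are assumptions.

import numpy as np

# Hypothetical stand-ins for the two trained networks; the actual models
# are TensorFlow graphs shipped with the GitHub code, not these functions.

def sample_segmentation(pose, shape, z):
    """Stage 1: generate a body/clothing part label map from pose, shape
    and a latent code (placeholder output)."""
    h, w, n_classes = 256, 256, 22  # assumed resolution and class count
    return np.random.randint(0, n_classes, size=(h, w))

def sample_image(segmentation, z_color):
    """Stage 2: generate an RGB image conditioned on the label map and a
    color latent (placeholder output)."""
    h, w = segmentation.shape
    return np.random.rand(h, w, 3)

# Sampling a new person end to end. In the real model both stages are
# differentiable, so gradients can flow through the whole chain.
pose, shape = np.zeros(72), np.zeros(10)   # e.g. SMPL-style conditioning
z, z_color = np.random.randn(64), np.random.randn(8)
labels = sample_segmentation(pose, shape, z)
image = sample_image(labels, z_color)
print(labels.shape, image.shape)  # (256, 256) (256, 256, 3)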
Bibtex:
@INPROCEEDINGS{Lassner:GeneratingPeople:2017,
  author    = {Christoph Lassner and Gerard Pons-Moll and Peter V. Gehler},
  title     = {A Generative Model for People in Clothing},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
  year      = {2017}
}
We provide our trained models for download. Like the code and data, they are freely available for academic and non-commercial use (see the license below). Click the thumbnails to download. The models were trained with TensorFlow v1.1 and are meant to be used with our code on GitHub (see the next section). To use them, unzip the archives in the project root directory.
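For reference, unpacking a downloaded archive into the project root can also be done with a few lines of Python; "model_archive.zip" below is a placeholder name, not the file name of an actual released model.

import zipfile

# Extract a downloaded model archive into the project root so the code
# from GitHub can find the checkpoints. Replace "model_archive.zip" with
# the archive you actually downloaded.
with zipfile.ZipFile("model_archive.zip") as archive:
    archive.extractall(".")  # "." = the project root directory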
We make the code as well as the datasets available for academic and non-commercial use under the Creative Commons Attribution-NonCommercial 4.0 International license.