Collaborators: Runchen Hu, Shen Wang, Lifan Wang
This repo is forked from https://github.com/tkarras/progressive_growing_of_gans
We made some minor changes in order to reproduce the experiments and achieve decent performance as described by the authors.
We ran our experiments both on Runchen's desktop with a GTX 1080 and on the NYU HPC Prince cluster with a Tesla P100. The CIFAR-10 experiment took about 14 hours of training, and the CelebA experiment took another 14 hours.
- Paper (NVIDIA research)
- Paper (arXiv)
- Result video (YouTube)
- Additional material (Google Drive)
- Representative images (images/representative-images)
- High-quality video clips (videos/high-quality-video-clips)
- Huge collection of non-curated images for each dataset (images/100k-generated-images)
- Extensive video of random interpolations for each dataset (videos/one-hour-of-random-interpolations)
- Pre-trained networks (networks/tensorflow-version)
- Minimal example script for importing the pre-trained networks (networks/tensorflow-version/example_import_script)
- Data files needed to reconstruct the CelebA-HQ dataset (datasets/celeba-hq-deltas)
- Example training logs and progress snapshots (networks/tensorflow-version/example_training_runs)
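For reference, loading one of the pre-trained snapshots follows the pattern of the upstream example import script. Below is a minimal sketch; the snapshot path and batch size are placeholders, and it assumes TensorFlow 1.x plus this repo on `PYTHONPATH`:

```python
import pickle
import numpy as np

def random_latents(batch_size, latent_size):
    """Draw a batch of Gaussian latent vectors for the generator."""
    return np.random.randn(batch_size, latent_size)

def generate_images(snapshot_path, batch_size=8):
    """Load a pickled (G, D, Gs) snapshot and sample images from Gs.

    snapshot_path is a placeholder for one of the .pkl files under
    the pre-trained networks folder; unpickling needs the repo's
    modules importable.
    """
    import tensorflow as tf
    tf.InteractiveSession()
    with open(snapshot_path, "rb") as f:
        G, D, Gs = pickle.load(f)  # Gs = smoothed generator used for sampling
    latents = random_latents(batch_size, Gs.input_shapes[0][1])
    labels = np.zeros([batch_size] + Gs.input_shapes[1][1:])
    # Returns NCHW float images, roughly in [-1, 1]
    return Gs.run(latents, labels)
```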
All the material, including source code, is made freely available for non-commercial use under the Creative Commons CC BY-NC 4.0 license. Feel free to use any of the material in your own work, as long as you give us appropriate credit by mentioning the title and author list of our paper.
We trained our PG-GAN model on the CelebA and CIFAR-10 datasets.
CIFAR-10: https://www.cs.toronto.edu/~kriz/cifar.html
CelebA: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
During training, the network periodically generates fake images; they are stored in the /results directory.
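A quick way to check training progress is to list the snapshot images a run has written so far. This is a minimal sketch, assuming the upstream naming convention of `fakes*.png` files inside `results/<run-id>/` subdirectories; adjust the pattern if your layout differs:

```python
import glob
import os

def list_snapshots(results_dir="results"):
    """Return paths of generated-image snapshots, sorted by filename.

    Assumes the layout results/<run-id>/fakes<kimg>.png, so sorting by
    filename orders snapshots by training progress.
    """
    pattern = os.path.join(results_dir, "*", "fakes*.png")
    return sorted(glob.glob(pattern))

if __name__ == "__main__":
    # Print newest snapshot last so the latest progress image is easy to spot.
    for path in list_snapshots():
        print(path)
```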