The goal of this Google Colab notebook is to capture the distribution of Steam banners and to sample from it with a StyleGAN.
- Acquire the data, e.g. as a snapshot called `128x128.zip` in another of my repositories,
- Follow the instructions to edit `train.py` in the official StyleGAN GitHub repository (a sketch of the edit is shown after this list),
- Run `StyleGAN.ipynb` to train a StyleGAN,
- To resume training from a checkpoint, you will have to edit `training/training_loop.py` (also sketched below).
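The exact lines to change depend on the release of the official repository, so the snippet below is only a sketch: the run description, dataset folder name and resume values are assumptions meant to illustrate the kind of edit involved.

```python
# Sketch of the edits in the official StyleGAN repository (values are assumptions).

# In train.py: point the training configuration at the Steam banners dataset,
# i.e. the .tfrecords files created from the 128x128 banners.
desc += '-steam'
dataset = EasyDict(tfrecord_dir='steam')
train.mirror_augment = False

# In training/training_loop.py: change the default arguments of training_loop()
# to resume from a previously saved network-snapshot-*.pkl.
resume_run_id = 0          # ID of the run directory that holds the checkpoint
resume_snapshot = None     # None selects the latest snapshot of that run
resume_kimg = 3524.0       # thousands of images already seen by the checkpoint
```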
NB: You might have to edit `metrics/frechet_inception_distance.py` to retrieve the network `inception_v3_features.pkl` locally if it cannot be downloaded from Google Colab.
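A minimal sketch of such an edit, assuming the metric loads the feature network with `misc.load_pkl` and that a local copy of the `.pkl` has already been uploaded to the Colab session:

```python
# In metrics/frechet_inception_distance.py (sketch; the path is an assumption).
# The original line downloads the feature network from a Google Drive URL:
#     inception = misc.load_pkl(<Google Drive URL>)  # inception_v3_features.pkl
# Replace it with a local copy uploaded to the Colab session:
inception = misc.load_pkl('inception_v3_features.pkl')
```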
The dataset consists of 31,723 Steam banners with RGB channels, resized from 460x215 to 128x128 resolution.
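For reference, the resizing step can be done with Pillow; a minimal sketch, assuming the original 460x215 banners are stored in a hypothetical `original/` folder:

```python
from pathlib import Path
from PIL import Image

SRC = Path('original')   # hypothetical folder with the 460x215 banners
DST = Path('128x128')    # output folder matching the snapshot name
DST.mkdir(exist_ok=True)

for src_path in SRC.glob('*.jpg'):
    with Image.open(src_path) as img:
        img = img.convert('RGB')                     # keep only the RGB channels
        img = img.resize((128, 128), Image.LANCZOS)  # 460x215 -> 128x128 (aspect ratio not preserved)
        img.save(DST / (src_path.stem + '.png'))
```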
Pre-processed data, as `.tfrecords` files, can be downloaded from Google Drive.
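If you would rather build the `.tfrecords` files yourself, the official repository ships `dataset_tool.py` for this purpose; a sketch with assumed folder names, run from the root of the StyleGAN repository:

```python
# Pack the resized 128x128 banners into multi-resolution .tfrecords files
# under datasets/steam/ (folder names are assumptions).
import dataset_tool

dataset_tool.create_from_images(tfrecord_dir='datasets/steam', image_dir='128x128', shuffle=1)
```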
A StyleGAN model was trained on 3,524,000 images (with a decreasing mini-batch size), which amounts to about 111 epochs. A checkpoint of the network can be downloaded from Google Drive.
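The checkpoint can then be used to sample new banners, following the pattern of the official `pretrained_example.py`; the checkpoint filename below is an assumption:

```python
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib  # from the official StyleGAN repository

tflib.init_tf()

# Load the generator from the downloaded checkpoint (filename is an assumption).
with open('network-snapshot-003524.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

# Sample a random latent vector and generate one 128x128 banner.
rnd = np.random.RandomState(0)
latents = rnd.randn(1, Gs.input_shape[1])
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt)

PIL.Image.fromarray(images[0], 'RGB').save('generated_banner.png')
```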
Caveat: training was manually stopped after roughly 1 day on a single Tesla K80 GPU in the cloud. Extrapolating from the expected training times for 1024x1024, 512x512 and 256x256 images, 9 days of computation might be required to get the best results at 128x128 resolution.
Results obtained with different numbers of images seen during training are shown on the Wiki.
A grid of generated Steam banners after 3,524 kimg is shown for each of:
- StyleGAN2
- StyleGAN
- DCGAN