DDPM From Scratch

Try generating sprites on Hugging Face Spaces :)

Implementation of an image-generation diffusion model, adapted from the DDPM paper. To test its capability in the simplest case, the model has been trained on the custom "Sprites" dataset from the DeepLearning.AI course.

Below are a sample from the training data and images generated with the DDPM and DDIM sampling algorithms. For more detail, see the accompanying notebook; the results shown were trained for 100 epochs with lr=3e-4. A GPU is strongly recommended for training.

[Figure: example generated images]
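For context, the two sampling algorithms differ in a single reverse-step update: DDPM injects fresh Gaussian noise at every step, while DDIM (with eta = 0) is deterministic. Below is a minimal NumPy sketch of both update rules, not this repository's actual code; the schedule arrays `betas`/`alpha_bar` and the noise estimate `eps_pred` (normally produced by the trained U-Net) are assumed inputs:

```python
import numpy as np

def ddpm_step(x_t, eps_pred, t, betas, alpha_bar, rng):
    """One stochastic DDPM reverse step: sample x_{t-1} from p(x_{t-1} | x_t)."""
    alpha_t = 1.0 - betas[t]
    # Posterior mean, with the model's noise estimate standing in for the true noise.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_t)
    if t == 0:
        return mean  # no noise is added on the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

def ddim_step(x_t, eps_pred, t, t_prev, alpha_bar):
    """One deterministic DDIM reverse step (eta = 0); t_prev may be < t - 1 to skip steps."""
    # Predict x_0 from the current sample and the noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_bar[t])
    ab_prev = alpha_bar[t_prev] if t_prev >= 0 else 1.0
    # Re-noise the x_0 prediction down to the earlier timestep, with no stochastic term.
    return np.sqrt(ab_prev) * x0_pred + np.sqrt(1.0 - ab_prev) * eps_pred
```

Because the DDIM update is deterministic and only evaluates alpha_bar at the visited timesteps, it can skip most of the T steps, which is why it samples much faster than DDPM.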

Getting Started

    git clone https://github.com/Efesasa0/diffusion-image-generation.git
    cd diffusion-image-generation
    pip install -r requirements.txt
    python train.py --save_dir_weights 'weights/' --save_periods 5 --inference_outs_dir 'outs/' --inference 'ddpm'

Arguments list

usage: train.py [-h] [--save_dir_weights SAVE_DIR_WEIGHTS] [--save_periods SAVE_PERIODS] [--inference_outs_dir INFERENCE_OUTS_DIR]
                [--inference INFERENCE] [--batch_size BATCH_SIZE] [--epochs EPOCHS] [--lr LR] [--features FEATURES] [--T T]
                [--beta_start BETA_START] [--beta_end BETA_END] [--dataset_name DATASET_NAME]

Train the diffusion model and optionally run generation

options:
  -h, --help            show this help message and exit
  --save_dir_weights SAVE_DIR_WEIGHTS
                        Directory to save the weights from training
  --save_periods SAVE_PERIODS
                        Integer to specify how often the model saves weights. Ex: save every 5 epochs
  --inference_outs_dir INFERENCE_OUTS_DIR
                        Directory to save the outputs from the generation
  --inference INFERENCE
                        Available options: ddpm, ddim
  --batch_size BATCH_SIZE
                        Batch size for training
  --epochs EPOCHS       Number of epochs to train for
  --lr LR               Learning rate for training
  --features FEATURES   Size of the hidden layers in the U-Net architecture
  --T T                 Number of steps in the reverse diffusion process
  --beta_start BETA_START
                        Start of the beta schedule
  --beta_end BETA_END   End of the beta schedule
  --dataset_name DATASET_NAME
                        Dataset name to train on
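To illustrate what --T, --beta_start, and --beta_end control: a linear beta schedule fixes the noise variance added at each of the T forward steps, and the cumulative product alpha_bar gives the fraction of the original image remaining at step t. A minimal NumPy sketch of this idea (the values below are illustrative, not necessarily the script's defaults):

```python
import numpy as np

def linear_beta_schedule(T, beta_start, beta_end):
    """Linear noise-variance schedule beta_1..beta_T, as set by --beta_start/--beta_end."""
    return np.linspace(beta_start, beta_end, T)

def forward_noise(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I) in one shot."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

betas = linear_beta_schedule(T=500, beta_start=1e-4, beta_end=0.02)
alpha_bar = np.cumprod(1.0 - betas)  # signal fraction remaining at each step

rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 16, 3))  # stand-in for a 16x16 RGB sprite
x_T = forward_noise(x0, len(betas) - 1, alpha_bar, rng)  # near-pure Gaussian noise
```

With this schedule alpha_bar decays toward zero, so by step T the sample is almost pure noise; sampling then runs this process in reverse for T steps.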

TODO

  • argparse the training script
  • add CIFAR-10, MNIST, CelebA for convenience.

References

  • Ho et al., "Denoising Diffusion Probabilistic Models" (2020)
  • Song et al., "Denoising Diffusion Implicit Models" (2020)
