This repo provides a general template for deep learning training built on the PyTorch Lightning library. Use this template to start new deep learning / ML projects.
- Built-in requirements
- Examples with CIFAR10
- Badges
- Bibtex
The project is organized as follows:

```
Project name
└── main.py          # The entry point for training and evaluation.
└── experiment.py    # The PyTorch Lightning training/evaluation module definition.
└── model            # The folder that holds the model definitions.
    └── __init__.py
    └── model1.py
    └── model2.py
    └── ...
└── utils            # The folder for utility functions and evaluation metrics.
    └── __init__.py
    └── utils.py
    └── metrics.py
└── data             # The folder that holds the data modules.
    └── __init__.py
    └── dataset1.py
    └── dataset2.py
    └── ...
└── conf             # The folder for the configuration files.
    └── setup.yaml
└── exp              # The folder for logging files and model checkpoints.
    └── exp1
        └── log
        └── checkpoint
```
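As an example of what one of the data modules could look like, here is a minimal sketch of a `data/dataset1.py` for the CIFAR10 example mentioned above; the class name, batch size, and transforms are illustrative assumptions, not the template's actual code.

```python
# data/dataset1.py -- minimal sketch (class name and defaults are assumptions)
import pytorch_lightning as pl
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10


class CIFAR10DataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str = "./data", batch_size: int = 128):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.transform = T.Compose([T.ToTensor()])

    def prepare_data(self):
        # Download once; Lightning calls this on a single process.
        CIFAR10(self.data_dir, train=True, download=True)
        CIFAR10(self.data_dir, train=False, download=True)

    def setup(self, stage=None):
        self.train_set = CIFAR10(self.data_dir, train=True, transform=self.transform)
        self.test_set = CIFAR10(self.data_dir, train=False, transform=self.transform)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def test_dataloader(self):
        return DataLoader(self.test_set, batch_size=self.batch_size)
```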
This template structures a deep learning project into four parts:
- Model specification
- Experiment settings
- Training settings
- Logger settings
These four parts are kept as independent of each other as possible so that the code stays readable and flexible.
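To make the split concrete, here is a minimal sketch of how `main.py` could wire the four parts together from the config file; the `Model1`, `Experiment`, and `CIFAR10DataModule` names are assumptions for illustration, not the template's actual classes.

```python
# main.py -- minimal sketch of the wiring (class names are assumptions)
import argparse

import pytorch_lightning as pl
import yaml

from data.dataset1 import CIFAR10DataModule  # data module (see data/)
from experiment import Experiment            # LightningModule (see experiment.py)
from model.model1 import Model1              # model definition (see model/)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", default="conf/setup.yaml")
    args = parser.parse_args()

    with open(args.config) as f:
        config = yaml.safe_load(f)

    # Model specification: "name" selects the model, the rest are constructor args.
    model_cfg = dict(config["model_params"])
    model_cfg.pop("name", None)
    model = Model1(**model_cfg)

    # Experiment settings: the LightningModule wraps the model and the training logic.
    experiment = Experiment(model, config["exp_params"])

    # Data module for the chosen dataset.
    datamodule = CIFAR10DataModule()

    # Logger settings: logs and checkpoints go under exp/.
    logger = pl.loggers.TensorBoardLogger(save_dir="exp", name="exp1")

    # Training settings: passed straight to the PyTorch Lightning Trainer.
    trainer = pl.Trainer(logger=logger, **config["trainer_params"])
    trainer.fit(experiment, datamodule=datamodule)


if __name__ == "__main__":
    main()
```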
## How to run

First, install the dependencies:

```bash
# clone the project
git clone https://github.com/YourGithubName/pl-template

# install the dependencies
cd pl-template
pip install -r requirements.txt
```
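The exact dependency set and pinned versions live in `requirements.txt`; as a rough, assumed minimum it would look something like the following (check the actual file in the repo).

```
# requirements.txt -- assumed minimal dependency set, not the actual pinned file
torch
torchvision
pytorch-lightning
PyYAML
```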
Next, run training and evaluation from the project root, pointing the entry script at a configuration file:

```bash
# from the project root
# run training and evaluation (example: a CIFAR10 classifier as your main contribution)
python main.py --config conf/setup.yaml
```
The configuration file has the following layout:

```yaml
model_params:
  name: "<name of classification model>"
  .                # Other parameters required by the model
  .
  .

exp_params:
  dataset: "<name of dataset>"
  .
  .
  .

trainer_params:
  gpus: 1
  .
  .
  .
```
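As a rough idea of how these sections are consumed, `experiment.py` could wrap the chosen model in a `LightningModule` along these lines; the class name, the `lr` key, and the cross-entropy loss are assumptions for illustration, not the template's actual code.

```python
# experiment.py -- minimal sketch of the training/evaluation module (names are assumptions)
import pytorch_lightning as pl
import torch
import torch.nn.functional as F


class Experiment(pl.LightningModule):
    def __init__(self, model, exp_params: dict):
        super().__init__()
        self.model = model
        self.exp_params = exp_params

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("test_loss", loss)

    def configure_optimizers(self):
        # An assumed "lr" entry in exp_params, with a fallback default.
        lr = self.exp_params.get("lr", 1e-3)
        return torch.optim.Adam(self.parameters(), lr=lr)
```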
## Citation

```bibtex
@article{YourName,
  title   = {Your Title},
  author  = {Your team},
  journal = {Location},
  year    = {Year}
}
```