Autoencoder Project - Release 2.1.3
Release Date: October 16, 2023.
Differences from the previous version:
- Updated the Dockerfile to allow GPU use.
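The Dockerfile itself is not reproduced in these notes, but once a GPU-enabled container is running, a quick sanity check is to look for the NVIDIA driver utilities inside it. The helper below is a minimal sketch using only the standard library; the function name `gpu_visible` is hypothetical, not part of the project:

```python
import shutil


def gpu_visible() -> bool:
    """Return True if the NVIDIA driver tools are on PATH inside the container.

    This only checks that `nvidia-smi` is installed and visible; it does not
    verify that a deep-learning framework can actually use the GPU.
    """
    return shutil.which("nvidia-smi") is not None


if __name__ == "__main__":
    print("GPU tools visible:", gpu_visible())
```

If this prints False inside the container, the image was likely built or run without GPU support (for example, without passing the appropriate GPU flags to `docker run`).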
v2.1.x Highlights:
- Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
- Addition of new autoencoder models, enhancing the diversity of the architectures available to the users.
- Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.
Features:
- Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
- Extended Autoencoder Models: Further expanding the available architectures, this release introduces additional autoencoder models, catering to advanced use cases and research requirements.
- Dataset Management Tools: Two new scripts simplify dataset handling:
- Resetting the dataset: Deletes and reconstructs the dataset structure.
- Random image copier: Facilitates copying random images from a source folder to the dataset.
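The two scripts are not shown in these notes; the sketch below illustrates the kind of logic they describe. The function names (`reset_dataset`, `copy_random_images`), the subdirectory layout, and the file extensions are assumptions for illustration, not the project's actual API:

```python
import random
import shutil
from pathlib import Path


def reset_dataset(root: str, subdirs=("train", "test")) -> None:
    """Delete the dataset directory and recreate its empty structure."""
    root_path = Path(root)
    if root_path.exists():
        shutil.rmtree(root_path)
    for sub in subdirs:
        (root_path / sub).mkdir(parents=True)


def copy_random_images(src: str, dst: str, n: int, exts=(".png", ".jpg")) -> int:
    """Copy up to n randomly chosen images from src into dst.

    Returns the number of files actually copied (fewer than n if the
    source folder holds fewer matching images).
    """
    images = [p for p in Path(src).iterdir() if p.suffix.lower() in exts]
    chosen = random.sample(images, min(n, len(images)))
    Path(dst).mkdir(parents=True, exist_ok=True)
    for p in chosen:
        shutil.copy2(p, Path(dst) / p.name)
    return len(chosen)
```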
Enhancements:
- Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
- Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
- Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.
Usage:
- Clone the repository and navigate to the project directory.
- To set up the Docker environment:
  - Build the Docker image:
    docker build -t autoencoder_project .
  - Run the Docker container:
    docker run -it --rm -v $(pwd):/app autoencoder_project bash
- Install the necessary dependencies with pip install -r requirements.txt (if not using Docker).
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Choose the autoencoder type and adjust the json/params.json file accordingly.
- Run the main script with python run.py.
- Use the dataset management scripts as needed to reset the dataset or copy random images.
- After training, visualize the displayed reconstruction results, which now reflect the chosen autoencoder type.
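The exact schema of json/params.json is not documented in these notes; the sketch below shows one plausible way a run script could read the chosen autoencoder type from it and dispatch to a model builder. The `"type"` field, the registry, and the builder names are all assumptions for illustration:

```python
import json

# Hypothetical registry mapping a params.json "type" value to a builder.
# The real project presumably maps such a value to actual model classes.
MODEL_BUILDERS = {
    "dense": lambda p: f"dense autoencoder (latent={p.get('latent_dim', 32)})",
    "conv": lambda p: f"convolutional autoencoder (latent={p.get('latent_dim', 32)})",
}


def build_from_params(path: str) -> str:
    """Load the params file and dispatch to the selected autoencoder builder."""
    with open(path) as f:
        params = json.load(f)
    kind = params["type"]
    if kind not in MODEL_BUILDERS:
        raise ValueError(f"Unknown autoencoder type: {kind!r}")
    return MODEL_BUILDERS[kind](params)
```

A dispatch table like this keeps the main script unchanged as new autoencoder variants are added: registering a new builder is enough to make a new "type" value valid.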