Releases: renan-siqueira/autoencoder-research
Docker support and new models (GPU fix)
Autoencoder Project - Release 2.1.3
Release Date: October 16, 2023.
Differences from the previous version
- Updated the Dockerfile to allow GPU use.
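The GPU-enabled Dockerfile itself is not reproduced in these notes. As an illustration only, a CUDA-capable image could be sketched as follows; the base image, tags, and layout here are assumptions, not the project's actual Dockerfile:

```dockerfile
# Hypothetical sketch -- base image and versions are assumptions,
# not the project's actual Dockerfile.
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["python", "run.py"]
```

With a CUDA base image like this, the container can be given GPU access at run time with docker run --gpus all, provided the NVIDIA Container Toolkit is installed on the host.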
v2.1.x
Highlights:
- Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
- Addition of new autoencoder models, enhancing the diversity of the architectures available to the users.
- Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.
Features:
- Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
- Extended Autoencoder Models: Further expanding on the available architectures, this release introduces additional autoencoder models, catering to advanced use-cases and research requirements.
- Dataset Management Tools: Two new scripts simplify dataset handling:
- Resetting the dataset: Deletes and reconstructs the dataset structure.
- Random image copier: Facilitates copying random images from a source folder to the dataset.
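The copier script itself is not shown in these notes; a minimal stdlib-only sketch of the same idea follows. The function name, arguments, and extension filter are assumptions for illustration, not the project's actual copy_randomic_files.py:

```python
import random
import shutil
from pathlib import Path

def copy_random_images(src_dir, dst_dir, n, seed=None):
    """Copy up to n randomly chosen image files from src_dir into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    # Collect candidate image files by extension.
    images = [p for p in src.iterdir()
              if p.suffix.lower() in {".png", ".jpg", ".jpeg"}]
    rng = random.Random(seed)
    chosen = rng.sample(images, min(n, len(images)))
    for p in chosen:
        shutil.copy2(p, dst / p.name)  # copy file with metadata
    return len(chosen)
```

A seed parameter makes the selection reproducible, which is convenient when rebuilding the same dataset split.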
Enhancements:
- Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
- Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
- Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.
Usage:
- Clone the repository and navigate to the project directory.
- To set up the Docker environment:
  - Build the Docker image: docker build -t autoencoder_project .
  - Run the Docker container: docker run -it --rm -v $(pwd):/app autoencoder_project bash
- Install the necessary dependencies with pip install -r requirements.txt (if not using Docker).
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Decide on the autoencoder type and adjust the json/params.json file.
- Run the main script with python run.py.
- Utilize the dataset management scripts as needed for resetting the dataset or copying random images.
- Post-training, visualize the reconstructed results, which now reflect the chosen autoencoder type.
Docker support and new models (bugfix)
Autoencoder Project - Release 2.1.2
Release Date: October 16, 2023.
Differences from the previous version
- Fixed a bug in the dataset path when copying files with copy_randomic_files.py.
v2.1.x
Highlights:
- Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
- Addition of new autoencoder models, enhancing the diversity of the architectures available to the users.
- Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.
Features:
- Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
- Extended Autoencoder Models: Further expanding on the available architectures, this release introduces additional autoencoder models, catering to advanced use-cases and research requirements.
- Dataset Management Tools: Two new scripts simplify dataset handling:
- Resetting the dataset: Deletes and reconstructs the dataset structure.
- Random image copier: Facilitates copying random images from a source folder to the dataset.
Enhancements:
- Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
- Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
- Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.
Usage:
- Clone the repository and navigate to the project directory.
- To set up the Docker environment:
  - Build the Docker image: docker build -t autoencoder_project .
  - Run the Docker container: docker run -it --rm -v $(pwd):/app autoencoder_project bash
- Install the necessary dependencies with pip install -r requirements.txt (if not using Docker).
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Decide on the autoencoder type and adjust the json/params.json file.
- Run the main script with python run.py.
- Utilize the dataset management scripts as needed for resetting the dataset or copying random images.
- Post-training, visualize the reconstructed results, which now reflect the chosen autoencoder type.
Docker support and new models (settings updated)
Autoencoder Project - Release 2.1.1
Release Date: October 16, 2023.
Differences from the previous version (v2.1.0)
- Improved version of settings/settings.py.
- Refactored copy_randomic_file.py for smarter copying.
v2.1.x
Highlights:
- Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
- Addition of new autoencoder models, enhancing the diversity of the architectures available to the users.
- Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.
Features:
- Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
- Extended Autoencoder Models: Further expanding on the available architectures, this release introduces additional autoencoder models, catering to advanced use-cases and research requirements.
- Dataset Management Tools: Two new scripts simplify dataset handling:
- Resetting the dataset: Deletes and reconstructs the dataset structure.
- Random image copier: Facilitates copying random images from a source folder to the dataset.
Enhancements:
- Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
- Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
- Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.
Usage:
- Clone the repository and navigate to the project directory.
- To set up the Docker environment:
  - Build the Docker image: docker build -t autoencoder_project .
  - Run the Docker container: docker run -it --rm -v $(pwd):/app autoencoder_project bash
- Install the necessary dependencies with pip install -r requirements.txt (if not using Docker).
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Decide on the autoencoder type and adjust the json/params.json file.
- Run the main script with python run.py.
- Utilize the dataset management scripts as needed for resetting the dataset or copying random images.
- Post-training, visualize the reconstructed results, which now reflect the chosen autoencoder type.
Docker support and new models
Autoencoder Project - Release 2.1.0
Release Date: October 16, 2023.
Highlights:
- Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
- Addition of new autoencoder models, enhancing the diversity of the architectures available to the users.
- Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.
Features:
- Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
- Extended Autoencoder Models: Further expanding on the available architectures, this release introduces additional autoencoder models, catering to advanced use-cases and research requirements.
- Dataset Management Tools: Two new scripts simplify dataset handling:
- Resetting the dataset: Deletes and reconstructs the dataset structure.
- Random image copier: Facilitates copying random images from a source folder to the dataset.
Enhancements:
- Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
- Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
- Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.
Usage:
- Clone the repository and navigate to the project directory.
- To set up the Docker environment:
  - Build the Docker image: docker build -t autoencoder_project .
  - Run the Docker container: docker run -it --rm -v $(pwd):/app autoencoder_project bash
- Install the necessary dependencies with pip install -r requirements.txt (if not using Docker).
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Decide on the autoencoder type and adjust the json/params.json file.
- Run the main script with python run.py.
- Utilize the dataset management scripts as needed for resetting the dataset or copying random images.
- Post-training, visualize the reconstructed results, which now reflect the chosen autoencoder type.
New Architectures and Features
Autoencoder Project - Release 2.0.0
Release Date: October 13, 2023.
Highlights:
- Expanded model architectures, introducing Convolutional Autoencoder and Variational Autoencoder, including a combination of both.
- Implementation of checkpointing functionality, providing an advanced and seamless way to save and continue model training.
- Enhanced evaluation and visualization mechanisms to cater to different autoencoder architectures.
Features:
- Diverse Model Architectures: Users now have the flexibility to train not just a simple Autoencoder but also a Convolutional Autoencoder, a Variational Autoencoder, and a Convolutional Variational Autoencoder.
- Checkpointing: Advanced training control with checkpointing, allowing users to save intermediate states of training and resume from them whenever required.
- Enhanced Visualization: With the introduction of new models, visualization capabilities have been expanded to provide a clearer understanding of how different architectures perform.
- General Code Improvements: Refactoring for cleaner code, optimized imports, and better modularization.
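The checkpointing feature described above follows a common save/resume pattern. A framework-agnostic, stdlib-only sketch is shown below; the project itself presumably serializes model and optimizer state with PyTorch utilities, so the function names and file layout here are assumptions for illustration:

```python
import pickle
from pathlib import Path

def save_checkpoint(state, path):
    """Serialize a training-state dict (epoch, weights, optimizer state, ...)."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    """Restore a previously saved training state, or None if no checkpoint exists."""
    path = Path(path)
    if not path.exists():
        return None
    with path.open("rb") as f:
        return pickle.load(f)
```

At the start of training, a loader like this lets the script resume from the last saved epoch instead of starting over, which is the behavior the checkpointing feature provides.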
Enhancements:
- Extended run.py to detect and handle different autoencoder architectures seamlessly.
- Updated trainer.py to handle the training nuances of the newly introduced autoencoder architectures.
- Modular design ensures easy extensibility to accommodate more sophisticated models in the future.
Usage:
- Clone the repository and navigate to the project directory.
- Install the necessary dependencies using pip install -r requirements.txt.
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Decide on the autoencoder type and adjust the main method in run.py.
- Run the main script with python run.py.
- Post-training, visualize the reconstructed results, which now reflect the chosen autoencoder type.
- Utilize the checkpointing feature to save intermediate training states and resume from them whenever required.
Initial Release
Autoencoder Project - Release 1.0.0
Release Date: October 13, 2023.
Highlights:
- Introduction of an end-to-end autoencoder training pipeline for 64x64 images.
- Efficient data handling using PyTorch's DataLoader for streamlined batching and preprocessing.
- Visual reconstruction comparison, showcasing original, encoded, and decoded images side-by-side.
Features:
- End-to-end Training: Seamlessly load data, train an autoencoder model, evaluate its performance, and visualize its reconstructions with a simple command.
- Modular Structure: Organized structure with separate modules for model definitions, data loading, and training utilities, making the project expandable and maintainable.
- Visualization Capabilities: After training, the model's capability to encode and decode is demonstrated with a visual comparison of original and reconstructed images.
- Model Saving & Loading: Easily save trained model weights to a file and reload them for later use, avoiding the need to retrain frequently.
Usage:
- Clone the repository and navigate to the project directory.
- Install the necessary dependencies using pip install -r requirements.txt.
- Adjust data paths and settings in settings/settings.py based on your dataset.
- Run the main script with python run.py.
- Post-training, visualize the reconstructed results, showcasing original, encoded, and decoded images.
- Trained models are saved automatically to a predefined path for future usage.