Docker support and new models (GPU fix)

@renan-siqueira released this on 16 Oct 2023 · fe0af2e

Autoencoder Project - Release 2.1.3

Release Date: October 16, 2023.


Differences from the previous version

  • Changed the Dockerfile to allow GPU use.
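
The updated Dockerfile itself is not reproduced in these notes. As a rough sketch, a GPU-capable setup typically starts from an NVIDIA CUDA base image so the container can reach the host GPU through the NVIDIA Container Toolkit; the base image tag and layout below are assumptions, not the project's actual file:

```dockerfile
# Illustrative sketch only; the real Dockerfile in the repository may differ.
# A CUDA runtime base image lets the container use the host GPU when run
# with the NVIDIA Container Toolkit (e.g. docker run --gpus all ...).
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python3", "run.py"]
```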

v2.1.x

Highlights:

  • Docker integration, offering a seamless and reproducible way to set up the project environment across different systems.
  • Addition of new autoencoder models, broadening the range of architectures available to users.
  • Auxiliary scripts for easy dataset management, including resetting the dataset and random image copying.

Features:

  • Docker Support: With the inclusion of a Dockerfile, users can easily containerize the project, ensuring consistency across various platforms and eliminating the "works on my machine" problem.
  • Extended Autoencoder Models: Further expanding the available architectures, this release introduces additional autoencoder models for advanced use cases and research requirements.
  • Dataset Management Tools: Two new scripts simplify dataset handling (a sketch of the image copier follows this list):
    • Resetting the dataset: Deletes and reconstructs the dataset structure.
    • Random image copier: Facilitates copying random images from a source folder to the dataset.
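
The scripts themselves are not included in these notes. A minimal sketch of what the random image copier could look like is shown below; the function name, arguments, and defaults are illustrative assumptions, not the project's actual CLI:

```python
# Illustrative sketch only; the real script in the repository may differ.
import argparse
import random
import shutil
from pathlib import Path


def copy_random_images(src: Path, dst: Path, count: int) -> None:
    """Copy `count` randomly chosen images from `src` into `dst`."""
    extensions = {".jpg", ".jpeg", ".png", ".bmp"}
    images = [p for p in src.iterdir() if p.suffix.lower() in extensions]
    dst.mkdir(parents=True, exist_ok=True)
    for path in random.sample(images, min(count, len(images))):
        shutil.copy2(path, dst / path.name)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Copy random images into the dataset folder.")
    parser.add_argument("--src", type=Path, required=True, help="Source folder with images.")
    parser.add_argument("--dst", type=Path, required=True, help="Destination dataset folder.")
    parser.add_argument("--count", type=int, default=100, help="Number of images to copy.")
    args = parser.parse_args()
    copy_random_images(args.src, args.dst, args.count)
```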

Enhancements:

  • Docker implementation ensures a consistent environment setup, eliminating potential discrepancies arising from different system configurations.
  • Newly added autoencoder models are integrated seamlessly into the existing codebase, making the training process straightforward.
  • Modular design continues to be a priority, ensuring that the project remains scalable and easy to understand.

Usage:

  1. Clone the repository and navigate to the project directory.

  2. To set up the Docker environment:

    • Build the Docker image: docker build -t autoencoder_project .
    • Run the Docker container: docker run -it --rm -v $(pwd):/app autoencoder_project bash (a GPU-enabled variant is sketched after this list)
  3. Install the necessary dependencies using pip install -r requirements.txt (if not using Docker).

  4. Adjust data paths and settings in settings/settings.py based on your dataset.

  5. Decide on the autoencoder type and adjust the json/params.json file (an illustrative example follows this list).

  6. Run the main script with python run.py.

  7. Utilize the dataset management scripts as needed for resetting the dataset or copying random images.

  8. After training, review the displayed reconstruction results, which reflect the chosen autoencoder type.
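
For the Docker path, steps 2-3 and 6 could be combined as below. The --gpus all flag is not mentioned in these notes; it is the standard Docker flag for exposing NVIDIA GPUs and requires the NVIDIA Container Toolkit on the host, so treat this as an assumed GPU-enabled variant rather than the documented invocation:

```bash
# Build the image from the project root.
docker build -t autoencoder_project .

# Run the container with the project mounted at /app.
# --gpus all assumes the NVIDIA Container Toolkit is installed on the host.
docker run -it --rm --gpus all -v $(pwd):/app autoencoder_project bash

# Inside the container (or locally, after pip install -r requirements.txt):
python run.py
```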
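
The exact schema of json/params.json is not spelled out in these notes, so the snippet below is purely illustrative; the key names (autoencoder_type, epochs, batch_size, learning_rate) are assumptions, and the file shipped in the repository should be treated as the reference:

```json
{
  "autoencoder_type": "convolutional",
  "epochs": 50,
  "batch_size": 32,
  "learning_rate": 0.001
}
```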