
bbox2SAMgmentation

Convert your bounding box annotations into instance segmentation annotations.

Report Bug · Request Feature

Table of Contents
  1. Getting Started
  2. Usage
  3. Roadmap

Getting Started

Prerequisites

Installation

  1. Clone the repo

    git clone https://github.com/johannes-p/bbox2samgmentation.git
  2. Setup the virtual environment

    cd bbox2samgmentation
    python -m venv venv
  3. Activate the virtual environment

    • on Windows:
      .\venv\Scripts\activate
    • on Linux/macOS:
      source venv/bin/activate
  4. Install PyTorch following the instructions on the PyTorch website

  5. Install the remaining dependencies

    pip install -r requirements.txt
  6. Put the Segment Anything model checkpoint into the models folder.
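Step 6 assumes the SAM checkpoint (a `.pth` file such as `sam_vit_b_01ec64.pth`) sits in a `models` folder next to the script. As a minimal sketch of how a script might locate such a checkpoint (the helper name and folder layout are illustrative assumptions, not part of this repo):

```python
from pathlib import Path
from typing import Optional

def find_checkpoint(models_dir: str = "models") -> Optional[Path]:
    """Return the first SAM .pth checkpoint found in models_dir, or None."""
    candidates = sorted(Path(models_dir).glob("*.pth"))
    return candidates[0] if candidates else None
```

If this returns None, the model checkpoint has not been placed correctly and the conversion cannot run.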

(back to top)

Usage

Make sure that the venv is active. If it isn't, activate it as described in the installation instructions.

After putting the images and annotations into the corresponding folders, you can run the program in its default configuration using:

python main.py --class_name <name_of_the_annotated_object>

After completion, an annotations.json file is written to the root directory; the generated mask images are also saved in the masks folder, should they be needed.
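Internally, each predicted binary mask has to be serialized into the annotation file. A minimal sketch of one common encoding (COCO-style uncompressed run-length encoding over a row-major flattened mask; this illustrates the idea and is not necessarily this repo's exact output format):

```python
def mask_to_rle(mask):
    """Encode a flat binary mask as COCO-style uncompressed RLE:
    alternating run lengths of 0s and 1s, always starting with 0s."""
    counts, prev, run = [], 0, 0
    for px in mask:
        if px == prev:
            run += 1
        else:
            counts.append(run)
            prev, run = px, 1
    counts.append(run)
    return counts
```

For example, the flat mask `[0, 0, 1, 1, 1, 0]` encodes to `[2, 3, 1]`: two background pixels, three foreground, one background.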

⚠ Make sure to try out the --use_bbox flag in case only parts of an object are detected. ⚠

If you want to use a model other than vit_b, just specify the path when calling the program:

python main.py -m <path/to/the/pth-file> ...

To further change the default behaviour, check out the available options using: python main.py --help

(back to top)

Roadmap

  • Input annotation format support
    • PascalVOC
    • COCO
    • CSV
    • ... ?
  • Multiclass support
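For the PascalVOC item above, extracting boxes from an annotation XML needs only the standard library. A sketch under the usual PascalVOC element layout (`object`/`name`/`bndbox`); the function name is made up for illustration:

```python
import xml.etree.ElementTree as ET

def parse_voc_boxes(xml_text, class_name):
    """Return [xmin, ymin, xmax, ymax] boxes for every object whose
    <name> matches class_name in a PascalVOC annotation document."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        if obj.findtext("name") != class_name:
            continue
        bb = obj.find("bndbox")
        boxes.append([int(float(bb.findtext(tag)))
                      for tag in ("xmin", "ymin", "xmax", "ymax")])
    return boxes
```

Filtering by class name here mirrors the tool's --class_name option; multiclass support would simply drop that filter and carry the name through.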

(back to top)