Convert your bounding box annotations to instance segmentation annotations.
## Prerequisites

- Python
- A Segment Anything Model checkpoint — the program defaults to `vit_b`, but any other model size works just as well
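The official Segment Anything release publishes three checkpoint sizes. The filename-to-model-type mapping below is a small helper sketch based on the checkpoint names published in the Segment Anything repository (verify them against the official download links before relying on them):

```python
# Checkpoint filenames as published in the Segment Anything repository;
# verify against the official download links before relying on them.
SAM_CHECKPOINTS = {
    "vit_b": "sam_vit_b_01ec64.pth",  # smallest; this tool's default
    "vit_l": "sam_vit_l_0b3195.pth",
    "vit_h": "sam_vit_h_4b8939.pth",  # largest; highest quality, slowest
}

def model_type_for(checkpoint_path: str) -> str:
    """Infer the model type ("vit_b"/"vit_l"/"vit_h") from a checkpoint path."""
    for model_type, name in SAM_CHECKPOINTS.items():
        if checkpoint_path.endswith(name):
            return model_type
    raise ValueError(f"unrecognized SAM checkpoint: {checkpoint_path}")
```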
## Installation

- Clone the repo

  ```sh
  git clone https://github.com/johannes-p/bbox2samgmentation.git
  ```

- Set up the virtual environment

  ```sh
  cd bbox2samgmentation
  python -m venv venv
  ```
- Activate the virtual environment
  - on Windows:

    ```sh
    .\venv\Scripts\activate
    ```

  - on Linux/macOS:

    ```sh
    source venv/bin/activate
    ```
- Install PyTorch following the instructions on its official page
- Install the remaining dependencies

  ```sh
  pip install -r requirements.txt
  ```

- Put the Segment Anything Model checkpoint into the `models` folder.
## Usage

Make sure that the venv is active. If it isn't, activate it as described in the installation instructions. After putting the images and annotations in the corresponding folders, you can run the program in its default configuration using:

```sh
python main.py --class_name <name_of_the_annotated_object>
```
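A possible project layout before running the tool. The `images/` and `annotations/` folder names here are assumptions based on the description above; run `python main.py --help` to confirm the actual defaults:

```text
bbox2samgmentation/
├── annotations/      # input bounding-box annotations (folder name assumed)
├── images/           # the corresponding images (folder name assumed)
├── models/           # SAM checkpoint (.pth)
└── main.py
```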
After completion, an `annotations.json` file is placed in the root directory; the generated mask images are also kept in the `masks` folder, should they be needed.
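For orientation, instance segmentation annotations typically store each object as a polygon rather than a box. The record below is a hypothetical minimal example in the common COCO style — the actual structure of the generated `annotations.json` is an assumption here, so inspect your own output file:

```python
import json

# Hypothetical minimal COCO-style instance segmentation record.
# The actual output format is an assumption -- inspect your annotations.json.
annotation = {
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    # Polygon as a flat [x1, y1, x2, y2, ...] list (here: a 40x30 rectangle)
    "segmentation": [[10.0, 10.0, 50.0, 10.0, 50.0, 40.0, 10.0, 40.0]],
    "bbox": [10.0, 10.0, 40.0, 30.0],  # x, y, width, height
    "area": 1200.0,
    "iscrowd": 0,
}
print(json.dumps(annotation, indent=2))
```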
⚠ If only parts of an object are detected, make sure to try the `--use_bbox` flag. ⚠
If you want to use a model different from `vit_b`, just specify the path when calling the program:

```sh
python main.py -m <path/to/the/pth-file> ...
```
To further change the default behaviour, check out the available options using:

```sh
python main.py --help
```
## Roadmap

- Input annotation format support
  - PascalVOC
  - COCO
  - CSV
  - ... ?
- Multiclass support