In proceedings - IEEE IRC 2023
Authors: Josselin Somerville Roberts, Yoni Gozlan, Paul-Emile Giacomelli, Julia Di
Conventional wheeled robots are unable to traverse scientifically interesting, but dangerous, cave environments. Multi-limbed climbing robot designs, such as ReachBot, are able to grasp irregular surface features and execute climbing motions to overcome obstacles, given suitable grasp locations. To support grasp site identification, we present a method for detecting rock cracks and edges, the SKeleton Intersection Loss (SKIL). SKIL is a loss designed for thin object segmentation that leverages the skeleton of the label. A dataset of rock face images was collected, manually annotated, and augmented with generated data. We also propose a new group of metrics, LineAcc, for thin object segmentation, designed so that the impact of object width on the score is minimized. In addition, the metric is less sensitive to translation, which can often lead to a score of zero when computing classical metrics such as Dice on thin objects. Our fine-tuned models outperform previous methods on similar thin object segmentation tasks such as blood vessel segmentation and show promise for integration onto a robotic system.
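The skeleton-based idea can be illustrated with a minimal sketch. Assumptions: this is a clDice-style skeleton overlap score on binary masks represented as sets of pixel coordinates, for intuition only; it is not the exact SKIL formulation, which is defined in the paper.

```python
def skeleton_overlap_score(pred, label, pred_skel, label_skel):
    """clDice-style score: how well each skeleton lies inside the
    other mask. Inputs are sets of (row, col) pixel coordinates."""
    # Topology precision: fraction of the predicted skeleton covered by the label.
    tprec = len(pred_skel & label) / max(len(pred_skel), 1)
    # Topology sensitivity: fraction of the label skeleton covered by the prediction.
    tsens = len(label_skel & pred) / max(len(label_skel), 1)
    if tprec + tsens == 0:
        return 0.0
    return 2 * tprec * tsens / (tprec + tsens)

# A 1-pixel-wide horizontal line, predicted with a 1-pixel vertical shift:
label = {(2, c) for c in range(10)}
pred = {(3, c) for c in range(10)}
# Plain Dice is 0 here (the masks share no pixels). Scoring skeletons
# against slightly dilated masks restores a meaningful signal:
dilated_label = {(r + d, c) for (r, c) in label for d in (-1, 0, 1)}
dilated_pred = {(r + d, c) for (r, c) in pred for d in (-1, 0, 1)}
print(skeleton_overlap_score(dilated_pred, dilated_label, pred, label))  # 1.0
```

This illustrates why skeleton-aware scores are forgiving of small translations that zero out pixel-wise overlap on thin objects.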
We provide a conda environment. Simply run:
conda env create -f environment.yml
conda activate reachbot
cd mmsegmentation
pip install -e .
(This step is quite long and can take up to 20 minutes.)
If this does not work, check this link for the exact libraries to install.
We provide this link, which contains all the required datasets (including the blood vessel datasets). At the root of this repo (here), create a folder called `datasets` and copy the content of the link into it. You will then be able to run all configs.
To recreate the cracks dataset from your own images, follow this guide. To download our cracks dataset (already formatted), please use this link. Then make sure to place the content of the dataset in `./datasets/cracks_combined` (this folder should contain `ann_dir` and `img_dir`).
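For reference, the expected layout is sketched below. The `img_dir`/`ann_dir` split comes from the instructions above; any subfolder structure inside them (e.g., train/val splits) is assumed to follow mmsegmentation's usual conventions and may differ slightly in this repo.

```
datasets/cracks_combined/
├── img_dir/   # input rock-face images
└── ann_dir/   # segmentation annotations
```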
To download the blood vessel datasets (CHASE DB1, DRIVE, HRF, and STARE), please use mmsegmentation's guide. You can then use our script `scripts/dataset_combiner.py` to combine the datasets (make sure to properly rename the images of each dataset first; see the documentation of the script for more details). You can also generate the deformed datasets by modifying the `MODIFIERS` object.
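The renaming step can be sketched as follows. This is a hypothetical helper, assuming a `<dataset>_<original-name>` prefix scheme to avoid file-name collisions when merging; check the documentation of `scripts/dataset_combiner.py` for the naming the script actually expects.

```python
import shutil
from pathlib import Path

def prefix_dataset_images(src_dir, dst_dir, prefix):
    """Copy every file from src_dir to dst_dir, prefixing its name
    with the dataset name so merged datasets never collide."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img in sorted(Path(src_dir).iterdir()):
        if img.is_file():
            shutil.copy(img, dst / f"{prefix}_{img.name}")
```

Example usage: `prefix_dataset_images("datasets/DRIVE/img_dir", "datasets/vessels_combined/img_dir", "drive")`.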
In this section we describe the exact commands needed to rerun our experiments and recreate the same results, including the figures (e.g., the first figure of the paper). All commands should be run inside the `reachbot` environment from `./mmsegmentation`.
To generate the mosaic of images, you can use our script:
cd ../scripts/generate_workflow_picture
python split_image.py --input_path <PATH-TO-YOUR-IMAGE>
Figure 2 was drawn, and Figures 3 and 4 are images taken from the datasets. To recreate Figure 5, we provide the script `scripts/metrics_comparison.py`. Run it with different parameters to recreate the images:
cd ../scripts
python metrics_comparison.py --desired_dice 0.2 --desired_crack_metric_diff 0.2 --output_path metric_images_2
python metrics_comparison.py --desired_dice 0.4 --desired_crack_metric_diff 0.2 --output_path metric_images_4
python metrics_comparison.py --desired_dice 0.6 --desired_crack_metric_diff 0.2 --output_path metric_images_6
Run the following trainings:
python tools/train_repeat.py --num_repeats 20 --config \
paper_configs/vit/cracks/dice.py \
paper_configs/vit/cracks/cl_dice.py \
paper_configs/vit/cracks/skil_dice.py \
paper_configs/vit/cracks/skil_prod.py \
-- --amp
See our Wandb run:
- Combined cracks with ViT-B: link
Run the following trainings:
python tools/train_repeat.py --num_repeats 20 --config \
paper_configs/vit/vessels/dice.py \
paper_configs/vit/vessels/cl_dice.py \
paper_configs/vit/vessels/skil_dice.py \
paper_configs/vit/vessels/skil_prod.py \
-- --amp
See our Wandb run:
- Combined vessels with ViT-B: link
Run the following trainings:
python tools/train_repeat.py --num_repeats 10 --config \
paper_configs/unet/cracks/ce.py \
paper_configs/unet/cracks/cl_dice.py \
paper_configs/unet/cracks/skil_dice.py \
paper_configs/unet/cracks/skil_prod.py \
-- --amp
See our Wandb run:
- Combined cracks dataset with U-Net: link
Run the following trainings:
python tools/train_repeat.py --num_repeats 10 --config \
paper_configs/unet/stare/ce.py \
paper_configs/unet/stare/cl_dice.py \
paper_configs/unet/stare/skil_dice.py \
paper_configs/unet/stare/skil_prod.py \
paper_configs/unet/drive/ce.py \
paper_configs/unet/drive/cl_dice.py \
paper_configs/unet/drive/skil_dice.py \
paper_configs/unet/drive/skil_prod.py \
-- --amp
See our Wandb runs:
Run the following script. It will display a menu of augmentations; choose the one you want to run.
cd ../scripts
python test_annotation_modifiers.py
Run the following trainings:
python tools/train_repeat.py --num_repeats 10 --config \
paper_configs/vit/vessels_shifted/dice.py \
paper_configs/vit/vessels_shifted/cl_dice.py \
paper_configs/vit/vessels_shifted/skil_dice.py \
paper_configs/vit/vessels_width/dice.py \
paper_configs/vit/vessels_width/cl_dice.py \
paper_configs/vit/vessels_width/skil_dice.py \
paper_configs/vit/vessels_cropped/dice.py \
paper_configs/vit/vessels_cropped/cl_dice.py \
paper_configs/vit/vessels_cropped/skil_dice.py \
paper_configs/vit/vessels_degraded/dice.py \
paper_configs/vit/vessels_degraded/cl_dice.py \
paper_configs/vit/vessels_degraded/skil_dice.py \
-- --amp
Table VI is then obtained by dividing the entries of Table V by the entries of Table II (see the paper for more details).
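The division step can be sketched as below. The loss names match the config files above, but the scores shown are placeholders, not the actual numbers from the paper.

```python
def ratio_table(perturbed, clean):
    """Divide each perturbed-dataset score (Table V) by the matching
    clean-dataset score (Table II) to get a robustness ratio."""
    return {loss: perturbed[loss] / clean[loss] for loss in perturbed}

clean = {"dice": 0.80, "cl_dice": 0.78, "skil_dice": 0.82}      # placeholder scores
perturbed = {"dice": 0.40, "cl_dice": 0.52, "skil_dice": 0.70}  # placeholder scores
print(ratio_table(perturbed, clean))
```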
See our Wandb runs: