[Pattern Recognition Letters 2024] Learning De-biased Prototypes for Few-shot Medical Image Segmentation

DMAP

The official implementation of paper: Learning De-biased Prototypes for Few-shot Medical Image Segmentation

Introduction

Prototypical networks have emerged as the dominant method for Few-shot Medical Image Segmentation (FSMIS). Despite their success, the Masked Average Pooling (MAP) operation commonly used in prototypical networks averages all foreground features within the mask indiscriminately, resulting in imprecise and inadequate prototypes that fail to capture the subtle nuances and variations in the data. To address this issue, we propose a simple yet effective module called De-biasing Masked Average Pooling (DMAP) to generate more accurate prototypes from filtered foreground support features. Specifically, our approach introduces a Learnable Threshold Generation (LTG) module that adaptively learns a threshold from the features extracted from both support and query images, and then keeps only the foreground pixels whose similarity exceeds that threshold when generating prototypes.
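The filtering idea can be illustrated with a minimal NumPy sketch. This is an illustration only, not the repository's implementation: a fixed `threshold` argument stands in for the value produced by the learned LTG module, and the function names are hypothetical.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Standard MAP: mean of the foreground features.
    features: (C, H, W) feature map, mask: (H, W) binary foreground mask."""
    fg = mask.reshape(1, -1)                          # (1, HW)
    feats = features.reshape(features.shape[0], -1)   # (C, HW)
    return (feats * fg).sum(axis=1) / (fg.sum() + 1e-6)

def debiased_prototype(features, mask, threshold):
    """Illustrative DMAP step: keep only the foreground pixels whose cosine
    similarity to the initial MAP prototype exceeds `threshold`, and average
    just those to form the de-biased prototype."""
    C = features.shape[0]
    proto = masked_average_pooling(features, mask)    # initial biased prototype
    feats = features.reshape(C, -1)                   # (C, HW)
    sim = (feats.T @ proto) / (
        np.linalg.norm(feats, axis=0) * np.linalg.norm(proto) + 1e-6
    )                                                 # (HW,) cosine similarities
    keep = (mask.reshape(-1) > 0) & (sim > threshold) # filtered foreground pixels
    if keep.sum() == 0:                               # fall back to plain MAP
        return proto
    return feats[:, keep].mean(axis=1)
```

With an outlier foreground pixel present, the de-biased prototype moves toward the majority of the foreground features instead of being dragged by the outlier.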

Getting started

Dependencies

Please install the following essential dependencies:

dcm2nii
json5==0.8.5
jupyter==1.0.0
nibabel==2.5.1
numpy==1.22.0
opencv-python==4.5.5.62
Pillow>=8.1.1
sacred==0.8.2
scikit-image==0.18.3
SimpleITK==1.2.3
torch==1.10.2
torchvision==0.11.2
tqdm==4.62.3
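In a pip-based environment, the pinned Python packages above can be installed in one step (dcm2nii is a standalone DICOM converter and must be installed separately, e.g. via your system package manager):

```shell
pip install json5==0.8.5 jupyter==1.0.0 nibabel==2.5.1 numpy==1.22.0 \
    opencv-python==4.5.5.62 "Pillow>=8.1.1" sacred==0.8.2 \
    scikit-image==0.18.3 SimpleITK==1.2.3 torch==1.10.2 \
    torchvision==0.11.2 tqdm==4.62.3
```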

Data sets and pre-processing

Download:

  1. Combined Healthy Abdominal Organ Segmentation data set
  2. Multi-sequence Cardiac MRI Segmentation data set (bSSFP fold)
  3. Multi-Atlas Abdomen Labeling Challenge

Pre-processing is performed according to Ouyang et al., and we follow the procedure described in their GitHub repository.

Training

  1. Compile ./supervoxels/felzenszwalb_3d_cy.pyx with Cython (python ./supervoxels/setup.py build_ext --inplace), then run ./supervoxels/generate_supervoxels.py
  2. Download the pre-trained ResNet-101 weights (vanilla version or deeplabv3 version), put them into your checkpoints folder, and replace the absolute path in ./models/encoder.py accordingly.
  3. Run ./script/train.sh
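Steps 1 and 3 above can be run as the following shell sequence (paths are those of this repository; the checkpoint path in step 2 still has to be edited by hand in ./models/encoder.py before launching training):

```shell
# Step 1: build the Cython extension for 3D Felzenszwalb supervoxels,
# then pre-compute the supervoxels used for self-supervision
python ./supervoxels/setup.py build_ext --inplace
python ./supervoxels/generate_supervoxels.py

# Step 3: launch training (after editing the checkpoint path, see step 2)
bash ./script/train.sh
```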

Inference

Run ./script/test.sh

Citation

@article{zhu2024learning,
  title={Learning De-biased prototypes for few-shot medical image segmentation},
  author={Zhu, Yazhou and Cheng, Ziming and Wang, Shidong and Zhang, Haofeng},
  journal={Pattern Recognition Letters},
  year={2024},
  publisher={Elsevier}
}
