SEPNet (IEEE TCSVT 2024)

  • This repository provides code for "Polyp Segmentation via Semantic Enhanced Perceptual Network" (IEEE TCSVT 2024).
  • Our paper is available here.
  • If you have any questions about our paper, feel free to contact me.

Authors: Tong Wang, Xiaoming Qi & Guanyu Yang.

News

  • [Aug/16/2024] We have open-sourced the model weights and prediction results.
  • [Jul/24/2024] Our paper has been accepted by IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT).

Overview

Introduction

Accurate polyp segmentation is crucial for the precise diagnosis and prevention of colorectal cancer. It remains challenging, however, mainly because polyps resemble their surroundings in color, shape, texture, and other respects, which makes accurate semantics difficult to learn.

To address this issue, we propose a novel Semantic Enhanced Perceptual Network (SEPNet) for polyp segmentation, which enhances polyp semantics to guide the exploration of polyp features. Specifically, we propose the Polyp Semantic Enhancement (PSE) module, which takes a coarse segmentation map as a basis and selects kernels to extract semantic information from the corresponding regions, enhancing the discriminability of polyp features that closely resemble the background. Furthermore, we design a plug-and-play semantic guidance structure for the PSE, which leverages accurate semantic information to guide scale perception and context fusion. Additionally, we propose a Multi-scale Adaptive Perception (MAP) module, which makes receptive fields more flexible by increasing information interaction between neighboring receptive-field branches and dynamically adjusting the size of the perception domain according to the contribution of each scale branch. Finally, we construct the Contextual Representation Calibration (CRC) module, which calibrates contextual representations by introducing an additional branch network to supplement details.
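The actual module implementations live under lib/modules/ (pse_module.py, map_module.py, crc_module.py). Purely as a hypothetical illustration of the PSE idea, not the authors' implementation, a coarse segmentation map can gate region-specific kernels like this; all layer choices below are assumptions:

# Illustrative sketch of the PSE idea only; the real module is
# lib/modules/pse_module.py, and every layer choice here is an assumption.
import torch
import torch.nn as nn

class PSESketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Hypothetical separate kernel sets for polyp and background regions.
        self.fg_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.bg_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, feat, coarse_map):
        # coarse_map: (B, 1, H, W) logits from an earlier decoder stage.
        prob = torch.sigmoid(coarse_map)
        # Extract semantics from each region with its own kernels, so polyp
        # features that resemble the background become more discriminative.
        enhanced = self.fg_conv(feat * prob) + self.bg_conv(feat * (1.0 - prob))
        return self.fuse(enhanced) + feat  # residual keeps the original cues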

Extensive experiments demonstrate that SEPNet outperforms 15 state-of-the-art methods on five challenging datasets across six standard metrics.

Qualitative Results

Figure: Qualitative Results.

Code

The structure of the project is as follows.

- checkpoint
- lib
    - backbones
        - efficientnet.py
        - efficientnet_utils.py
        - __init__.py
        - pvtv2.py
        - res2net.py
        - resnet.py
    - __init__.py
    - model_for_eval.py
    - model_for_train.py
    - modules
        - cbr_block.py
        - crc_module.py
        - encoder.py
        - get_logit.py
        - __init__.py
        - map_module.py
        - pse_module.py
- measure
    - eval_list.py
    - eval_metrics.py
    - __init__.py
    - metric.py
- myTest.py
- myTrain.py
- result
- utils
    - dataloader.py
    - __init__.py
    - loss_func.py
    - trainer_for_six_logits.py
    - utils.py

Dataset

  • Download the testing dataset from this Google Drive link (327.2 MB). It contains five sub-datasets: CVC-300 (60 test samples), CVC-ClinicDB (62 test samples), CVC-ColonDB (380 test samples), ETIS-LaribPolypDB (196 test samples), and Kvasir (100 test samples).
  • Download the training dataset from this Google Drive link (399.5 MB). It contains two sub-datasets: Kvasir-SEG (900 training samples) and CVC-ClinicDB (550 training samples). A minimal loading sketch follows this list.
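The repository's own loader is utils/dataloader.py. As a hedged sketch only, assuming the common images/-and-masks/ folder layout that these polyp benchmarks ship with (the folder names are assumptions), iterating over a test split looks like:

# Hedged sketch; the repository's actual loader is utils/dataloader.py,
# and the images/ and masks/ folder names are assumptions about the layout.
from pathlib import Path
from PIL import Image
import numpy as np

def iter_pairs(dataset_root):
    img_dir = Path(dataset_root, "images")
    mask_dir = Path(dataset_root, "masks")
    for img_path in sorted(img_dir.glob("*.png")):
        mask_path = mask_dir / img_path.name
        image = np.asarray(Image.open(img_path).convert("RGB"))
        mask = np.asarray(Image.open(mask_path).convert("L")) > 127  # binarize GT
        yield image, mask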

Weight

  • During training, the pre-trained parameters of the backbone network must be loaded; the weights of PVT-v2-b2 can be downloaded from here.
  • You can also load our trained model weights directly for inference; the weights of our proposed SEPNet can be downloaded here. A generic loading sketch follows this list.
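myTrain.py and myTest.py are the actual entry points. As a generic PyTorch loading pattern, not this repo's exact code, restoring either the backbone or the full checkpoint amounts to:

# Generic PyTorch loading pattern, not the repo's exact entry point
# (see myTrain.py / myTest.py for how weights are really consumed).
import torch

def load_weights(model, ckpt_path):
    state = torch.load(ckpt_path, map_location="cpu")
    # strict=False tolerates keys present in only one of the two cases
    # (backbone-only pre-training vs. the full SEPNet checkpoint).
    model.load_state_dict(state, strict=False)
    return model.eval()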

Prediction Results

  • You can also directly download our prediction results for evaluation. The prediction maps of our proposed SEPNet can be downloaded here; a minimal scoring sketch follows.
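The official evaluation code lives under measure/ (eval_metrics.py, metric.py) and covers all six reported metrics. For a quick sanity check on a downloaded prediction map, a minimal Dice/IoU computation (a sketch, not the official suite) is:

# Minimal sanity-check metrics; the repo's full evaluation suite is in
# measure/eval_metrics.py and measure/metric.py.
import numpy as np

def dice_and_iou(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou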

Citation

@article{wang2024polyp,
  title={Polyp Segmentation via Semantic Enhanced Perceptual Network},
  author={Wang, Tong and Qi, Xiaoming and Yang, Guanyu},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
