If you use this code, please cite:
Faisal Mahmood, Daniel Borders, Richard Chen, Gregory N. McKay, Kevan J. Salimian, Alexander Baras, and Nicholas J. Durr. "Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images." arXiv preprint arXiv:1810.00236 (2018). Accepted to IEEE Transactions on Medical Imaging (In Press).
- Linux (Tested on Ubuntu 16.04)
- NVIDIA GPU (Tested on Nvidia P100 using Google Cloud)
- CUDA and CuDNN (CPU mode and CUDA without CuDNN may work with minimal modification, but are untested)
- PyTorch>=0.4.0
- torchvision>=0.2.1
- dominate>=2.3.1
- visdom>=0.1.8.3
All image pairs must be 256x256 and paired together into 512x256 images. '.png' and '.jpg' files are acceptable. To avoid domain adaptation issues, sparse stain normalization is recommended for all train and test data; we used this tool. Data needs to be arranged as follows:
SOMEPATH
└── Datasets
└── XYZ_Dataset
├── test
└── train
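Each 512x256 training pair is the 256x256 input image and its 256x256 ground-truth mask placed side by side. A minimal sketch of building such a pair with Pillow (the `make_pair` helper and its file paths are hypothetical, not part of this repository):

```python
from PIL import Image

def make_pair(image_path, mask_path, out_path):
    """Concatenate a 256x256 image and its 256x256 mask side by side
    into a single 512x256 pair image (hypothetical helper)."""
    img = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("RGB")
    # Both halves must already be 256x256.
    assert img.size == (256, 256) and mask.size == (256, 256)
    pair = Image.new("RGB", (512, 256))
    pair.paste(img, (0, 0))     # left half: input image
    pair.paste(mask, (256, 0))  # right half: segmentation mask
    pair.save(out_path)
```

Run this over every image/mask pair and place the outputs under `train` and `test` as shown above.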
To train a model:
python train.py --dataroot <datapath> --name NU_SEG --gpu_ids 0 --display_id 0
--lambda_L1 70 --niter 200 --niter_decay 200 --pool_size 64 --loadSize 256 --fineSize 256
- To view training losses and results, run
python -m visdom.server
and click the URL http://localhost:8097. For cloud servers, replace localhost with your IP address. - To view epoch-wise intermediate training results, open
./checkpoints/NU_SEG/web/index.html
To test the model:
python test.py --dataroot <datapath> --name NU_SEG --gpu_ids 0 --display_id 0
--loadSize 256 --fineSize 256
- The test results will be saved to an HTML file here:
./results/NU_SEG/test_latest/index.html
- Pretrained models can be downloaded here. Place the pretrained model in
./checkpoints/NU_SEG
This model was trained on sparse-stain-normalized data, so all test images should be normalized for best results. See the Dataset section for more information.
- Please open new threads or report issues to FaisalMahmood@bwh.harvard.edu
- An immediate response to minor issues may not be available.
This project is licensed under the MIT License - see the LICENSE.md file for details
- This code is inspired by pytorch-DCGAN, pytorch-CycleGAN-and-pix2pix and SNGAN-Projection
- Subsidized computing resources were provided by Google Cloud.
If you find our work useful in your research please consider citing our paper:
@article{mahmood2018adversarial,
  title   = {Adversarial Training for Multi-Organ Nuclei Segmentation in Computational Pathology Images},
  author  = {Mahmood, Faisal and Borders, Daniel and Chen, Richard and McKay, Gregory and Salimian, Kevan J. and Baras, Alexander and Durr, Nicholas J.},
  journal = {IEEE Transactions on Medical Imaging},
  year    = {2018}
}