This benchmark includes an image dataset with ground truth image smoothing results, as well as baseline algorithms that produce competitive edge-preserving smoothing results for a wide range of image content. The dataset contains 500 training and testing images covering a number of representative visual object categories, while the baseline methods are built upon representative deep convolutional network architectures, on top of which we design novel loss functions well suited to edge-preserving image smoothing. The trained deep networks run faster than most state-of-the-art smoothing algorithms while producing leading smoothing results both qualitatively and quantitatively.
- Dataset: the dataset can be downloaded here via Google Drive.
- Previous_Methods: the code and sample smoothing results of previous state-of-the-art methods can be downloaded here via Google Drive.
Please download these two zip files and unzip them into the working directory. The complete file structure should look like this:
.
├── Previous_Methods
│   ├── Previous_Methods_Code
│   ├── method_1_sdf
│   ├── method_2_l0
│   ├── method_3_fgs
│   ├── method_4_treefilter
│   ├── method_5_wmf
│   ├── method_6_l1
│   └── method_7_llf
├── dataset
│   ├── compute_WMAE_WRMSE.py
│   ├── gt_images
│   ├── origin_images
│   └── weight_matrix.mat
├── Resnet_tl
├── VDCNN_tf
├── README.md
└── README
- original images: The constructed dataset for edge-preserving smoothing contains 500 natural images named 0001-0500.png (0001-0400.png for training and 0401-0500.png for testing). The original images are located at dataset/origin_images.
- ground truth images: Each image is associated with 14 human-selected smoothing results, of which we keep only the five most frequently chosen. The "ground truth" images are located at dataset/gt_images, and the weight for each ground truth image is saved in dataset/weight_matrix.mat. For example, the ground truth images of 0001.png are named 0001_1.png--0001_5.png.
Original image sample:
Ground truth image samples:
We propose two quantitative measures: Weighted Mean Absolute Error (WMAE) and Weighted Root Mean Squared Error (WRMSE). Run dataset/compute_WMAE_WRMSE.py to evaluate the performance of your own algorithm:
cd dataset
python compute_WMAE_WRMSE.py --result_path=path_to_your_result_folder
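For reference, the sketch below illustrates one way the weighted metrics can be computed for a single image. It assumes the five weights per image sum to one, that the key inside weight_matrix.mat is `weight_matrix`, and that weights are indexed by image id; these are assumptions, and compute_WMAE_WRMSE.py remains the authoritative implementation.

```python
# Minimal sketch of WMAE / WRMSE for one image (run from the dataset/ directory).
import numpy as np
from PIL import Image
from scipy.io import loadmat

def weighted_errors(result_dir, gt_dir, img_id, weights):
    """Compute WMAE and WRMSE of result_dir/<img_id>.png against its five ground truths."""
    result = np.asarray(Image.open('%s/%s.png' % (result_dir, img_id)), dtype=np.float64)
    wmae, wrmse = 0.0, 0.0
    for k in range(5):
        gt = np.asarray(Image.open('%s/%s_%d.png' % (gt_dir, img_id, k + 1)), dtype=np.float64)
        diff = result - gt
        wmae += weights[k] * np.mean(np.abs(diff))            # weighted mean absolute error
        wrmse += weights[k] * np.sqrt(np.mean(diff ** 2))     # weighted root mean squared error
    return wmae, wrmse

w = loadmat('weight_matrix.mat')['weight_matrix']   # 'weight_matrix' key is an assumption
# w[400] assumes row 400 holds the weights for test image 0401.png (0-based indexing).
print(weighted_errors('path_to_your_result_folder', 'gt_images', '0401', w[400]))
```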
- Our code is based on TensorFlow v1.12 in Python 3.6.
- Our code has been tested on Ubuntu 16.04 and macOS.
- CPU is supported, but GPU is preferred for training (a quick environment check is sketched below).
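The following snippet is a convenience check only (not part of the released code) to confirm that a TensorFlow 1.x installation is present and whether a GPU is visible:

```python
# Quick check of the TensorFlow 1.x environment.
import tensorflow as tf

print('TensorFlow version:', tf.__version__)          # expected: 1.12.x
print('GPU available:', tf.test.is_gpu_available())   # training is much faster on a GPU
```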
- VDCNN: a very deep convolutional neural network. A pretrained model "model-140002" is provided in the checkpoint folder. The test results will be saved to result_VDCNN.
cd VDCNN_tf
python main_vdsr_ReguTerm.py
python test.py
- ResNet: a residual-block-based network. A pretrained model "model-345002" is provided in the checkpoint folder. The test results will be saved to result_ResNet.
cd Resnet_tl
python main_resnet_delta.py
python test_delta.py
You can set your own training hyperparameters by providing additional arguments (see the example command below), such as:
- --iteration_num: number of training iterations
- --batch_size: training batch size
- --patch_size: cropped patch size
- --stride: stride between training patches
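For example, to train the ResNet model with custom settings (the values below are purely illustrative; the actual defaults are defined inside the scripts):
cd Resnet_tl
python main_resnet_delta.py --iteration_num=300000 --batch_size=16 --patch_size=64 --stride=32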
If you find this benchmark helpful for your research, please consider citing:
To be added
We have selected several state-of-the-art methods, including:
- SD filter: B. Ham, M. Cho, and J. Ponce, “Robust guided image filtering using nonconvex potentials,” TPAMI 2017 [paper]
- L0 smoothing: L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via L0 gradient minimization,” TOG 2011 [paper]
- Fast Global Smoothing (FGS): D. Min, S. Choi, J. Lu, B. Ham, K. Sohn, and M. N. Do, “Fast global image smoothing based on weighted least squares,” TIP 2014 [paper]
- Tree Filtering: L. Bao, Y. Song, Q. Yang, H. Yuan, and G. Wang, “Tree filtering: Efficient structure-preserving smoothing with a minimum spanning tree,” TIP 2014 [paper]
- Weighted Median Filter (WMF): Q. Zhang, L. Xu, and J. Jia, “100+ times faster weighted median filter (WMF),” CVPR 2014 [paper]
- L1 smoothing: S. Bi, X. Han, and Y. Yu, “An L1 image transform for edge-preserving smoothing and scene-level intrinsic decomposition,” TOG 2015 [paper]
- Local Laplacian Filter (LLF): S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid,” SIGGRAPH 2011 [paper]
Check Previous_Methods/Previous_Methods_Code/test_parameter.m for the usage of these methods. All of them are implemented in MATLAB:
im_smooth = sdfilter(g,u0,f,nei,param.sdfilter.lambda,mu,nu,step,issparse); % SD filter
im_smooth = L0Smoothing(im,param.l0.lambda,param.l0.kappa);                 % L0 smoothing
im_smooth = FGS(im,param.fgs.sigma,param.fgs.lambda);                       % Fast Global Smoothing
im_smooth = TreeFilterRGB_Uint8(im,param.treefilter.sigma,param.treefilter.sigma_s); % Tree Filtering
im_smooth = jointWMF(im,im,param.wmf.r,param.wmf.sigma);                    % Weighted Median Filter
im_smooth = l1flattening(im, l1_param);                                     % L1 smoothing
im_smooth = lapfilter(im,param.llf.sigma_r,param.llf.alpha,1,'rgb','lin');  % Local Laplacian Filter
Take the SD filter as an example. The folder Previous_Methods/method_1_sdf contains smoothed results produced with one specific parameter setting. You can evaluate the performance of the SD filter with that parameter setting on our benchmark by running
cd dataset
python compute_WMAE_WRMSE.py --result_path=../Previous_Methods/method_1_sdf/parameters_2
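To compare several parameter settings of the same method, the evaluation script can be invoked in a loop. The sketch below assumes the result folders are named parameters_<n> (as in the SD filter example above) and that it is run from the dataset/ directory:

```python
# Evaluate every parameter setting of one previous method in a single sweep.
import glob
import subprocess

for folder in sorted(glob.glob('../Previous_Methods/method_1_sdf/parameters_*')):
    print('Evaluating', folder)
    subprocess.run(['python', 'compute_WMAE_WRMSE.py', '--result_path=' + folder], check=True)
```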
Smoothing high-contrast details while preserving edges is a useful step in many applications. We briefly discuss two of them, tone mapping and contrast enhancement, implemented by applying the trained ResNet model as the edge-preserving smoothing filter; please refer to the paper for more details. A rough sketch of the underlying base/detail decomposition follows the list below.
- Tone Mapping
- Contrast Enhancement
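As an illustration of how such a filter is typically used in these applications, the sketch below boosts local contrast by amplifying the detail layer left after smoothing. Here smooth_fn is a hypothetical callable wrapping the trained ResNet model, and the exact pipelines used in the paper may differ.

```python
# Illustrative detail/contrast enhancement built on an edge-preserving smoother.
import numpy as np

def enhance_detail(image, smooth_fn, boost=2.0):
    """image: float array in [0, 1]; smooth_fn: hypothetical edge-preserving smoother."""
    base = smooth_fn(image)            # structure / base layer
    detail = image - base              # high-contrast details removed by smoothing
    enhanced = base + boost * detail   # amplify details while keeping edges intact
    return np.clip(enhanced, 0.0, 1.0)
```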
This work was partially supported by Hong Kong Research Grants Council under General Research Funds (HKU17209714 and PolyU152124/15E).
We would also like to thank Lida Li, Jin Xiao, Xindong Zhang, Hui Li, Jianrui Cai, Sijia Cai, Hui Zeng, Hongyi Zheng, Wangmeng Xiang, Shuai Li, Runjie Tan, Nana Fan, Kai Zhang, Shuhang Gu, Jun Xu, Lingxiao Yang, Anwang Peng, Wuyuan Xie, Wei Zhang, Weifeng Ge, Kan Wu, Haofeng Li, Chaowei Fang, Bingchen Gong, Sibei Yang and Xiangru Lin for constructing the dataset.