Modeling the Distributional Uncertainty for Salient Object Detection Models

CVPR 2023

Abstract

Most existing salient object detection (SOD) models focus on improving overall model performance, without explicitly explaining the discrepancy between the training and testing distributions. In this paper, we investigate a particular type of epistemic uncertainty, namely distributional uncertainty, for salient object detection. Specifically, for the first time, we explore existing class-aware distribution-gap exploration techniques, i.e. long-tail learning, single-model uncertainty modeling and test-time strategies, and adapt them to model the distributional uncertainty for our class-agnostic task. We define test samples that are dissimilar to the training dataset as “out-of-distribution” (OOD) samples. Different from the conventional OOD definition, where OOD samples are those not belonging to the closed-world training categories, OOD samples for SOD are those that break the basic priors of saliency, i.e. the center prior, color contrast prior, compactness prior, etc., indicating that OOD is “continuous” rather than discrete for our task. We carry out extensive experiments to verify the effectiveness of existing distribution-gap modeling techniques for SOD, and conclude that both train-time single-model uncertainty estimation techniques and weight-regularization solutions that prevent model activations from drifting too much are promising directions for modeling distributional uncertainty for SOD.

Distributional Uncertainty

Visualization of different types of uncertainty: aleatoric uncertainty $p(y|x^\star,\theta)$ is caused by the inherent randomness of the data; model uncertainty $p(\theta|D)$ arises in low-density regions of the training data, where multiple solutions exist; and distributional uncertainty $p(x^\star|D)$ occurs when the test sample $x^\star$ is poorly supported by the model trained on the dataset $D$.
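
For context, a standard Bayesian view (our addition, using the caption's notation) marginalizes model uncertainty out of the predictive distribution:

$$p(y \mid x^\star, D) = \int p(y \mid x^\star, \theta)\, p(\theta \mid D)\, d\theta,$$

where $p(y \mid x^\star, \theta)$ carries the aleatoric uncertainty and $p(\theta \mid D)$ the model uncertainty. Distributional uncertainty $p(x^\star \mid D)$, i.e. how well the test sample is supported by the training data, is not captured by this marginalization, which is why it needs to be modeled explicitly.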

Motivation

“OOD” samples for salient object detection. Different from class-aware tasks, OOD for saliency detection is continuous: it can be defined by attributes that break the basic saliency priors, i.e. the center prior, contrast prior, compactness prior, etc. We aim to explore distributional uncertainty estimation for saliency detection.

Environment

PyTorch 1.10.0
Torchvision 0.11.1
CUDA 11.4
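
As a quick sanity check (our suggestion, not a script from this repository), you can verify the installed versions in Python:

```python
import torch
import torchvision

# Versions the repository was developed with (see list above).
print(torch.__version__)         # expected: 1.10.0
print(torchvision.__version__)   # expected: 0.11.1
print(torch.version.cuda)        # CUDA toolkit the PyTorch binaries were built against
print(torch.cuda.is_available()) # True if a usable GPU is detected
```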

Dataset

We use the DUTS training dataset to train our models, and the DUTS-test, ECSSD and DUT datasets for evaluation.
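
For reference, below is a minimal sketch of a paired image/mask loader. The class name, folder layout and resize settings are our assumptions for illustration, not the repository's actual data pipeline:

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
from torchvision.transforms import InterpolationMode

class SODDataset(Dataset):
    """Pairs each RGB image with its binary saliency mask.

    Assumes a layout like DUTS/images/*.jpg and DUTS/masks/*.png;
    adjust the paths and extensions to your local copy.
    """
    def __init__(self, root, size=352):
        self.img_dir = os.path.join(root, "images")
        self.mask_dir = os.path.join(root, "masks")
        self.names = sorted(os.path.splitext(f)[0] for f in os.listdir(self.img_dir))
        self.img_tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        self.mask_tf = transforms.Compose([
            # Nearest-neighbor interpolation keeps the mask binary.
            transforms.Resize((size, size), interpolation=InterpolationMode.NEAREST),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        img = Image.open(os.path.join(self.img_dir, name + ".jpg")).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name + ".png")).convert("L")
        return self.img_tf(img), self.mask_tf(mask)
```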

Acknowledgement

We summarize many methods (MCDropout, DeepEnsemble, TALT, NorCal, WB, MCP, Energy, SML, GradNorm, ExGrad, TCP, DC, ReAct, CoTTA, CTTA) and apply them to the SOD task to model distributional uncertainty. Thanks to these awesome and meaningful works.
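
To illustrate what applying such a method to SOD looks like, below is a minimal sketch of two of the listed scores (MCP and Energy) computed from per-pixel saliency logits. It assumes a two-channel per-pixel network output and is our own illustration, not the repository's implementation:

```python
import torch

def mcp_uncertainty(logits):
    """Maximum Class Probability (MCP) uncertainty per pixel.

    logits: (B, 2, H, W), channel 0 = background, channel 1 = salient.
    Confidence is the softmax probability of the predicted class;
    uncertainty = 1 - confidence, peaking near the decision boundary.
    """
    confidence = torch.softmax(logits, dim=1).max(dim=1).values
    return 1.0 - confidence

def energy_score(logits, T=1.0):
    """Free-energy score of Liu et al. (2020), per pixel.

    E(x) = -T * logsumexp(logits / T); higher energy indicates
    a more OOD-like pixel, lower energy a more in-distribution one.
    """
    return -T * torch.logsumexp(logits / T, dim=1)

# Example: per-pixel logits from any two-channel SOD head.
logits = torch.randn(2, 2, 256, 256)
u_mcp = mcp_uncertainty(logits)  # (B, H, W), values in (0, 0.5]
u_energy = energy_score(logits)  # (B, H, W), unbounded; higher = more uncertain
```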
