============================
- Contrastive Loss
- Batch-All Loss and Batch-Hard Loss from "In Defense of the Triplet Loss for Person Re-Identification"
- New Positive Mining Loss based on Fuzzy Clustering [SOTA on standard metric learning datasets]
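To illustrate one of the losses above, here is a minimal sketch of the batch-hard triplet loss from "In Defense of the Triplet Loss for Person Re-Identification": for each anchor, take the farthest same-class embedding and the closest different-class embedding within the batch. This is only an illustration, not this repo's actual implementation (the function name and margin value are our own choices):

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)
    # Boolean mask: same[i, j] is True when i and j share a label.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Hardest positive per anchor: farthest embedding with the same label.
    pos_dist = (dist * same.float()).max(dim=1).values
    # Hardest negative per anchor: closest embedding with a different label.
    neg_dist = dist.masked_fill(same, float('inf')).min(dim=1).values
    # Standard hinge on the positive/negative gap, averaged over anchors.
    return torch.relu(pos_dist - neg_dist + margin).mean()
```

When the classes are already well separated by more than the margin, the loss is zero; mining the hardest pairs inside the batch is what makes this loss effective without offline triplet mining.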
- CUB-200-2011: first 98 classes as the train set and the last 98 classes as the test set
- Cars196: first 98 classes as the train set and the last 98 classes as the test set
- Stanford Online Products: for the experiments, we split 59,551 images of 11,318 classes for training and 60,502 images of 11,316 classes for testing
After downloading all three data files, preprocess them as described above and put the resulting directory, named DataSet, in the project root. We provide a script to preprocess CUB (Deep_Metric/DataSet/split_dataset.py). The other two datasets are handled similarly; you can modify the script yourself.
We use the Inception-BN network, as other metric learning papers do. To save you time, we have already downloaded the pretrained weights and put them on Baidu YunPan.
We also put Inception-V3 on Baidu YunPan; its performance is slightly worse (about 1.5% on Recall@1) than Inception-BN on the CUB/Cars datasets.
Download link: https://pan.baidu.com/s/1snmKa1v
- Computer with Linux or OSX
- PyTorch. To exactly reproduce the results in our paper, make sure to use the same version as us: 0.2.0_3. Other versions have problems loading the pretrained Inception-BN model.
- For training, an NVIDIA GPU is strongly recommended for speed. CPU is supported but training may be slow.
With our loss based on fuzzy clustering:
sh run_train.sh
To reproduce the other experiments, edit the run_train.sh file yourself.
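A run_train.sh of this kind is typically a thin wrapper that sets hyperparameters and calls the training script. As a rough illustration of what editing it might involve, here is a hypothetical version; every script name, flag, and value below is an assumption for illustration, not the repo's actual interface:

```shell
# Hypothetical run_train.sh -- all flags and values are assumptions;
# check the repo's real training script for the actual interface.
DATA=cub        # dataset: cub / car / product
LOSS=fuzzy      # loss: contrastive / batchall / batchhard / fuzzy
NET=bn-inception  # backbone: bn-inception / inception-v3

python train.py --data "$DATA" --loss "$LOSS" --net "$NET"
```

Switching datasets or losses for the other experiments would then be a matter of changing these variables.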