Repository for the paper: Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning
[Paper]
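# method sketch
In a nutshell, wave-SAN augments styles in the frequency domain: features are decomposed with a 2D Haar wavelet, the style (channel-wise mean/std, as in AdaIN) of the low-frequency band is swapped between source episodes, and the high-frequency bands are kept so that shape cues survive. For intuition only, here is a minimal self-contained PyTorch sketch of that idea; the function names (`haar_dwt`, `adain`, `wavelet_style_swap`) are ours, not this repo's API:

```python
# Illustrative sketch only -- not the implementation used in this repo.
import torch

def haar_dwt(x):
    # Single-level 2D Haar transform of a (B, C, H, W) tensor
    # (H and W assumed even). Returns the low-freq band ll and
    # three high-freq bands lh, hl, hh.
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (c + d - a - b) / 2
    hl = (b + d - a - c) / 2
    hh = (a + d - b - c) / 2
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    # Exact inverse of haar_dwt.
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, 2 * H, 2 * W)
    x[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2
    x[:, :, 0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[:, :, 1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2
    return x

def adain(content, style, eps=1e-5):
    # Replace channel-wise mean/std (the "style") of `content`
    # with those of `style`.
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_sd = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_sd = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mu) / c_sd * s_sd + s_mu

def wavelet_style_swap(x, y):
    # Re-style x with y's low-frequency style; the high-frequency
    # bands, which mostly carry shapes/edges, are left untouched.
    ll_x, lh_x, hl_x, hh_x = haar_dwt(x)
    ll_y, _, _, _ = haar_dwt(y)
    return haar_idwt(adain(ll_x, ll_y), lh_x, hl_x, hh_x)

# Example: swap styles between two random "feature maps".
x, y = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
print(wavelet_style_swap(x, y).shape)  # torch.Size([2, 64, 16, 16])
```

Leaving lh/hl/hh untouched is the point of going through the wavelet transform: the style swap cannot distort high-frequency structure.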
# conda env
```bash
conda create --name py36 python=3.6
conda activate py36
conda install pytorch torchvision -c pytorch
pip3 install "scipy>=1.3.2"
pip3 install "tensorboardX>=1.4"
pip3 install "h5py>=2.9.0"
```
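As a quick, optional sanity check (assuming the installs above succeeded), the core dependencies should all import inside the `py36` env:

```python
# Optional sanity check: all core dependencies import and report versions.
import torch, torchvision, scipy, h5py, tensorboardX
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("scipy:", scipy.__version__, "| h5py:", h5py.__version__)
```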
# code
```bash
git clone https://github.com/lovelyqian/wave-SAN-CDFSL
cd wave-SAN-CDFSL
```
# datasets
We use mini-ImageNet as the single source dataset, and cub, cars, places, plantae, ChestX, ISIC, EuroSAT, and CropDisease as the novel target datasets.
(Note: results on ChestX, ISIC, EuroSAT, and CropDisease are not reported in the paper; for your convenience, we keep these datasets in the code.)
For mini-ImageNet, cub, cars, places, and plantae, we refer to the FWT repo.
For ChestX, ISIC, EuroSAT, and CropDisease, we refer to the BS-CDFSL repo.
# pretrain
We recommend using the pretrained checkpoint provided by FWT.
Alternatively, you can pretrain it yourself:
```bash
python3 network_train.py --dataset miniImagenet --stage pretrain --name your-exp-pretrain --train_aug --stop_epoch 400 --save_freq 100
```
# meta-train
Take 5-way 1-shot as an example:
```bash
python3 network_train.py --dataset miniImagenet --stage metatrain --name your-exp-name --train_aug --warmup baseline --n_shot 1 --stop_epoch 200 --save_freq 100
```
The `--warmup baseline` argument can be replaced by `your-exp-pretrain` if you ran the pretraining step above.
After meta-training is done, the script automatically performs the inference.
Our meta-trained checkpoints can be found in wave-SAN 1shot and wave-SAN 5shot.
# meta-test
If you wish to test a specific model on a specific target dataset:
```bash
python3 test_function.py --dataset cub --name your-exp-name --n_shot 1
```
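If you want to sweep every target dataset with the same model, a small driver like the one below works. Note that this loop is a convenience sketch of ours, not a script shipped with the repo; it only re-invokes the command above:

```python
# Convenience sweep (our sketch, not part of the repo): run the meta-test
# command once per target dataset.
import subprocess

for target in ["cub", "cars", "places", "plantae",
               "ChestX", "ISIC", "EuroSAT", "CropDisease"]:
    subprocess.run(["python3", "test_function.py",
                    "--dataset", target,
                    "--name", "your-exp-name",
                    "--n_shot", "1"],
                   check=True)
```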
# citation
If you find our paper or this code useful for your research, please consider citing us (●°u°●)」:
```
@article{fu2022wave,
  title={Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning},
  author={Fu, Yuqian and Xie, Yu and Fu, Yanwei and Wang, Jue and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2203.07656},
  year={2022}
}
```
Also, we have published StyleAdv (CVPR 2023), which outperforms wave-SAN by generating both "virtual" and "hard" styles via an adversarial style attack. [Paper], [Code], [Presentation Video on Bilibili]
```
@inproceedings{fu2023styleadv,
  title={StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning},
  author={Fu, Yuqian and Xie, Yu and Fu, Yanwei and Jiang, Yu-Gang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24575--24584},
  year={2023}
}
```
We also have the works meta-FDMixup, Me-D2N, and TGDM, which tackle CD-FSL with few labeled target examples.
Our code is built upon the implementations of FWT and ATA. Thanks for their work.